Rent-a-SuperComputer from Amazon
Written by Harry Fairhead   
Sunday, 20 November 2011

Amazon has now introduced an Extra Large instance of its cluster compute machines, so big that it wins Amazon EC2 a place in the list of the top 500 supercomputers. With cluster computing you can now afford huge amounts of processing power.

The idea that Amazon EC2 provides a way of getting a lot of computer power for very little cash is well known. It has been suggested that bad guys use EC2 to crack passwords, digital signatures, and hashes in general. But being able to rent a super cluster big enough to rival the sort of thing that only governments or big business can afford is another step up.

Amazon has now introduced an Extra Large instance of its cluster compute machines. This is so big that it wins Amazon EC2 a place in the list of the top 500 supercomputers. The cc2.8xlarge instance is currently at number 42. It has 16 Xeon cores, 60.5 GBytes of RAM and over 3 TBytes of storage. You can configure it with either Linux or Windows Server and it costs just $2.40 an hour per instance. You could probably get computing time for even less by buying spot instances, i.e. bidding for slack capacity on the EC2 cloud.
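If you are curious how you would actually get hold of one of these, the sketch below shows roughly what launching a cluster compute instance looks like in code. It uses boto3, the current Python AWS SDK, which post-dates this article, and the AMI ID and placement group name are placeholders, so treat it as an illustration rather than a recipe:

import boto3

# Assumptions: boto3 is configured with valid AWS credentials, the AMI ID
# and placement group name below are placeholders, and the cc2.8xlarge
# instance type is available in your region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster compute instances are launched into a placement group so that
# the nodes get the low-latency 10 Gigabit networking between them.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder: an HVM Linux AMI of your choice
    InstanceType="cc2.8xlarge",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "hpc-demo"},
)
print(response["Instances"][0]["InstanceId"])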


However, this is just the start of the story, because this is cluster computing and you can build bigger clusters by running more instances. Amazon put together a 1064-instance (17,024-core) cluster of cc2.8xlarge instances that achieved 240.09 TeraFLOPS on the High Performance Linpack benchmark. (It is this configuration that secured it number 42 in the list of "big" computers.) At the quoted rate of $2.40 per instance per hour, that implies you can have a supercomputer of your own for around $2,550 per hour.
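To make that arithmetic explicit, here is a back-of-the-envelope calculation in Python, using nothing more than the per-instance rate and cluster size quoted above:

# Rough cost of the Linpack-benchmarked cluster at the quoted on-demand rate.
ON_DEMAND_RATE = 2.40        # dollars per cc2.8xlarge instance-hour
INSTANCES = 1064             # size of the benchmarked cluster
CORES_PER_INSTANCE = 16

total_cores = INSTANCES * CORES_PER_INSTANCE
cost_per_hour = INSTANCES * ON_DEMAND_RATE
print(f"{total_cores} cores for about ${cost_per_hour:,.2f} per hour")
# 17024 cores for about $2,553.60 per hour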

If this isn't big enough for you, Cycle Computing claims to have put together 3809 instances to create a 30,472-processor monster called Nekomata, with 27 TB of RAM and 2 PB of storage. The cost is claimed to be $1279 per hour, which is certainly reasonable in all senses. The cluster was spread across three EC2 data centers and used spot instances to keep the costs down. Nekomata certainly deserves a place in the top 500 supercomputers and it has been used for real jobs - computing molecular models.
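The same sort of arithmetic shows why spot pricing matters. Dividing the claimed hourly cost by the number of instances gives the average rate paid per instance; the article doesn't say which instance types Cycle Computing used, so this is only an average, not a quoted spot price:

# Implied average per-instance rate for the Nekomata run, from the figures above.
CLAIMED_COST_PER_HOUR = 1279.0   # dollars, as claimed by Cycle Computing
INSTANCES = 3809

average_rate = CLAIMED_COST_PER_HOUR / INSTANCES
print(f"about ${average_rate:.2f} per instance-hour")
# about $0.34 per instance-hour - a fraction of typical on-demand pricing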

So what does all this mean?

It means that if you want to run algorithms that previously needed a dedicated supercomputer, you no longer have to invest millions of dollars to build one first. It also means that, given the will and a much smaller amount of money, tasks that were thought to be beyond the reach of non-government organizations, such as decryption, are within the reach of anyone who can find a few thousand dollars.

On the positive side, it lowers the barrier to entry for scientific computation in a way that should make it possible for smaller research groups to try out ideas that otherwise might never see the light of day.


More Information:

http://aws.amazon.com/hpc

Announcing New Amazon EC2 Cluster Compute Instance

Cycle Computing


Related articles:

EC2 GPU cracks passwords on the cheap

Amazon extends options for high performance cloud computing

 

Last Updated ( Monday, 21 November 2011 )