Amazon EC2 Now More Ready for Application Hosting

For months now, I've been geeked about Amazon's EC2 as a web hosting service. But until today, in my opinion, it wasn't ready for prime time. Now it is, for two reasons. One, you can get static IPs, so if an outward-facing VM goes down you can quickly start another one and point your site's traffic to it without waiting for DNS propagation. And two, you can now separate your VMs into "physically distinct, independent infrastructure" zones, so you can plan to keep your site up if a tornado takes out one NOC. If I were developing a new website I'd host it there; buying or leasing real hardware for a startup seems silly. If you have questions, or especially if you know something about other companies' virtual hosting options, post comments -- let's compare notes.
  • IPv6 (Score:4, Interesting)

    by rubeng ( 1263328 ) on Thursday March 27, 2008 @11:46AM (#22883202) Journal
    Nice, don't suppose there's any chance of IPv6 support - give each instance, running or not, a unique address.
    • Re:IPv6 (Score:4, Insightful)

      by tolan-b ( 230077 ) on Thursday March 27, 2008 @12:02PM (#22883386)
      I suspect only tunneled over IPv4.

      What I'm personally waiting for from EC2 is European datacentres, as I have an application that's latency sensitive. :(
      • Re: (Score:3, Informative)

        by nacturation ( 646836 ) *

        What I'm personally waiting for from EC2 is European datacentres, as I have an application that's latency sensitive. :(
        You can use Amazon's S3 Europe [amazon.com] for serving static files from their European datacentre.
         
        • Re: (Score:3, Insightful)

          by tolan-b ( 230077 )
          Yeah I need low latency to the server running the app. Hopefully the fact that they've opened a Euro datacentre for S3 is an indication they might do the same for EC2 though.
          • "More ready" is wonderfully relative.

            "Less unready" is just as accurate, and perhaps more precise.

            Without an SLA, EC2, SimpleDB, or any other "head in the cloud" service is an experimental platform.
            • by aminorex ( 141494 ) on Thursday March 27, 2008 @03:32PM (#22886070) Homepage Journal
              That depends a lot on the scale of your operation and the scale of your hosting service. The value of an SLA is that you can sue to recover damages in case of non-compliance. But it may not be possible to recover real damages in court: Your provider may not have pockets that deep, you may not have pockets as deep as your lawyers' thirst for money, and the law may not allow for full recovery in your circumstance.

              EC2 is up and stays up. Reliability counts for a lot more than legal recourse, in my book. SLAs don't create reliability; they *help* (hopefully) to create legal recourse, which is a very poor substitute.
  • by Animats ( 122034 )

    If you're using Amazon for hosting, you can't switch hosting services; their system is too nonstandard. Do you want to be in a position where they can raise prices or cut off your air supply?

    • by nacturation ( 646836 ) * <nacturation&gmail,com> on Thursday March 27, 2008 @11:56AM (#22883330) Journal

      If you're using Amazon for hosting, you can't switch hosting services; their system is too nonstandard. Do you want to be in a position where they can raise prices or cut off your air supply?
      EC2 allows you to set up your own servers on their infrastructure. Ultimately, this is as standard as getting a virtual or dedicated server at any one of thousands of other hosting providers. Switching is as easy as replicating the environment you've created for yourself (which is likely a standard LAMP stack anyway) and then doing a DNS change.
       
      • Re: (Score:3, Interesting)

        by Anonymous Coward
        Yes and no. Since it's not persistent, you have to set up some kind of backup/replication from day one -- S3 being the common choice here. My startup uses EC2+S3, and just getting a Linux image serving a webpage is completely standard (yay), but setting up all the replication and monitoring and whatnot that a real server actually needs is kind of a pain. You end up with a lot of EC2/S3-specific fun, at least on the administration side.

        As just one example, we don't do full backups, but rather have our ima
    • Re: (Score:3, Informative)

      It's pretty much a standard i386/PAE Xen image... I've not tried, but if you take an image of your filesystem, you should be able to move it to another Xen hosting provider that supports i386/PAE. Of course, most competitors don't have Amazon's whiz-bang provisioning technology. Uh, not to whore out my own links, but I run a small Xen hosting provider [prgmr.com] (btw, EC2 kicks my ass when it comes to price per megabyte of RAM) - and I (and I assume many of my competitors [xensource.com]) provide a read-only rescue image where you
      • Uh, not to whore out my own links, but I run a small Xen hosting provider
        Can you drop me a mail if you start offering FreeBSD domUs? I have a dedicated server running OpenBSD at the moment, but I'd be interested in a backup host.
    • by dogas ( 312359 ) on Thursday March 27, 2008 @12:33PM (#22883782) Homepage
      Your comment makes it apparent that you really don't understand how hosting a website works.

      My company uses EC2 (plus a few other amazon services, which I find to be spectacular) for hosting our application. If we wanted to move to another server or company or datacenter, it's just a matter of setting up the new server and repointing the DNS. Also what is nonstandard about their servers? You basically set them up however you want. You want to run linux? cool. FreeBSD? awesome. Basically you can run any *NIX clone you please. Fortunately lots of people provide excellent templates, so rolling your own is not really necessary.
      • Re: (Score:2, Informative)

        by mr_da3m0n ( 887821 )
        >You basically set them up however you want. You want to run linux? cool. FreeBSD? awesome. Basically you can run any *NIX clone you please.

        No you don't. You have to run Linux. And they pick the kernel. It runs on Xen after all.

        Also, why does everyone seem to ignore the fact that the virtual machines are automatically wiped/reset to base image state whenever they terminate?

        While inconvenient, their API is simply fantastic. My EC2 machines boot, add-remove certain components, and then deploy data from S3
  • I have a question: (Score:5, Insightful)

    by Megaweapon ( 25185 ) on Thursday March 27, 2008 @11:50AM (#22883240) Homepage
    Is this a Slashvertisement?
    • by jamie ( 78724 ) * Works for Slashdot <jamie@slashdot.org> on Thursday March 27, 2008 @12:01PM (#22883372) Journal

      Just in case you were serious... :)

      Slashdot, and the company that runs it, SourceForge Inc., aren't using Amazon Web Services for anything that I know of. Slashdot runs on real hardware, not VMs, and we're not planning on changing that anytime soon. I don't know anyone using AWS, which is part of why I'm looking for Slashdot reader feedback. My experience with it is limited to starting up some instances and playing around with installing Apache to see how it all works, and I did that on my own nickel. I chatted with someone at Amazon about AWS last year, but I didn't sign an NDA, so I learned about today's news through their public mailing list.

      • Re: (Score:3, Interesting)

        Doug Kaye from IT Conversations has been doing some pretty heavy stuff on EC2. He did a podcast with an Amazon guy on Technometria where they got into a lot of detail; have a listen [conversationsnetwork.org].
      • >don't know anyone using AWS, which is part of why I'm looking for Slashdot reader feedback.

        SmugMug (www.smugmug.com) uses AWS, or at least S3. It's a pretty big for-pay photo-sharing site.
    • No (Score:3, Interesting)

      Amazon just has a very interesting service architecture. This is why you keep seeing articles all over the place about it.
    • FWIW, Amazon Web Services did host the Seattle Slashdot anniversary party [pudge.net]. I'm not suggesting there was any impropriety, however.
  • How much bandwidth transfer a month can I get there, and how much does it cost? What's the max sustained bandwidth that I can get from one of their servers?

    And if I'm competing with Amazon by running a popular streaming radio station (even paying the required royalties, but of course not to Amazon), will they start shutting me down?
    • Re: (Score:3, Informative)

      by brunascle ( 994197 )
      Pricing and bandwidth are outlined here [amazon.com], about halfway down the page, and there's a nifty pricing calculator here [amazonaws.com].

      Looks pretty reasonable to me, but I don't really have anything to compare it to. No minimum fee; it's based entirely on bandwidth, resources, and usage.
      • The calc shows that data transfer costs $0.18/GB out ($0.10/GB in), with no maximum (or minimum) charge. It doesn't show max bandwidth, but I'd expect Amazon to have some fat connectivity, though I'd want a CIR (Committed Information Rate, or guaranteed minimum rate) for any real pro application.

        But I can get data transfer (in or out) for $0.05/GB up to 2 TB/mo, with root access on an actual dedicated server, not VPS ($0.03/GB for VPS). At a datacenter I've used for a couple years, with good support, >99.
        • Um, you forgot to tell us who your existing hosting company is.
        • by gfilion ( 80497 )

          It doesn't show max bandwidth, but I'd expect Amazon to have some fat connectivity, though I'd want a CIR (Committed Information Rate, or guaranteed minimum rate) for any real pro application.

          They don't have a CIR, but I remember reading in the docs that they have 250 Mbps per virtual machine.

          So while Amazon looks interesting, I think I'll keep my existing hosting company, which charges anywhere from a half to a sixth of what Amazon's new, relatively untested service does, with potential competition from Amazon's own services.

          It's true that Amazon is more expensive than a dedicated server, but the idea is that it's elastic: you could run 3 servers during peak time (let's say 4 hours per day) then scale down to one the rest of the day. This is cheaper than 3 dedicated servers.

          • Well, 250 Mbps is (250 Mbit/s * 86,400 s/day * 30.48 days / 8 bits per byte) about 82.3 TB/mo max. That would cost $12,439.28 per month at Amazon, and (at $0.05/GB up to 2 TB, then $1/GB beyond) $79,896 at my "cheaper" host, but only because of the punitive per-GB cost of exceeding the 2 TB cap at my host. Since I load balance already, the 82.3 TB would be split across 41 or 42 $100/mo servers, which would cost $4200 at my host.
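            (That bandwidth figure is easier to sanity-check in a few lines of Python:)

            ```python
            # Sketch only: maximum monthly transfer for a sustained 250 Mbps link.
            MBPS = 250
            SECONDS_PER_MONTH = 30.48 * 24 * 3600       # ~2.63 million seconds

            megabytes = MBPS * SECONDS_PER_MONTH / 8.0  # megabits -> megabytes
            print(megabytes / 1e6)                      # ~82.3 TB per month
            ```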

            Spending that $4200 at Amazon gets 23.3TB, which at my host would cost $1200. That $1200 at Amazon would get 6.7TB, which would cost $
            • Re: (Score:3, Interesting)

              by vidarh ( 309115 )
              At my last company we were looking at EC2 as a "backup" solution to handle spikes - for that it may be cost effective. But looking at our bandwidth graphs, and the cost differential, spikes of the magnitude where it'd make a difference were incredibly rare. We had maybe one event over 2+ years where it would have made a difference. If you prepare your system for virtualization anyway, you could handle that by bringing up extra capacity on EC2 and using your cheaper host for normal day-to-day use.

              In fa

              • That is an excellent compromise.

                Is there a way to start a minimal account at EC2 which just idles away doing nothing, and a virtualization account somewhere else (like at my cheap, but flat rate per server, provider), and then very quickly clone my running virtual session over the Internet to the EC2, where it quickly starts running to handle the spikes (then shuts down)? With that setup, I could use EC2 as purely generic spike capacity, and just need as much advance warning that a spike is going to max out
            • by a9db0 ( 31053 )
              I know you've been asked before, but I'll try again:

              Who is your "cheaper" host?

              I'm not asking to be a pain, I'm just interested in alternatives.

              • I avoided mentioning it because it's a competitive advantage I'd prefer not to share with all of Slashdot (and my competitors, who read Slashdot, especially stories like this one).

                You can feel free to believe that I'm making it up. I probably would, if I were in your shoes. I'm not making it up, but I'm not going to give it away here. Sorry.
  • NOC (Score:2, Informative)

    I think you are confused ... all the NOCs of Amazon could go down and your servers (which are in a data centre) would continue to operate. http://en.wikipedia.org/wiki/Network_operations_center [wikipedia.org]
    • by jamie ( 78724 ) * Works for Slashdot
      Ah, I was using the term to mean data center. Didn't realize they were sometimes physically separated. My misunderstanding.
  • Some more about EC2 (Score:5, Informative)

    by jamie ( 78724 ) * Works for Slashdot <jamie@slashdot.org> on Thursday March 27, 2008 @12:05PM (#22883434) Journal

    So here's a little about what EC2 actually is, for those of you who don't know. You don't have to reply here, start your own comments ;)

    The Elastic Compute Cloud was originally designed as a way to host applications that needed lots of CPUs, and the option to expand by adding more CPUs. It's a hosting service that lets you start up virtual machines to run any software you want: they have a wide variety of pre-packaged open-source operating systems you can pick to start up your VMs with.

    Starting up a VM takes just a minute or two, and it's point-and-click thanks to the Firefox extension [amazonwebservices.com]. Each VM comes in one of three sizes [amazon.com]: small (webhead), large (database), and extra large (bigass database). They cost respectively $72, $288, and $576 a month (billed by the hour), plus bandwidth ($0.18/GB out, somewhat cheaper for data going in and there's a price break at 10 TB).
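    (For reference, launching doesn't have to go through the Firefox extension; the same thing can be scripted. Below is a rough sketch using the third-party boto Python library; the AMI ID and key pair name are placeholders, and the exact method names may differ between boto versions.)

    ```python
    # Sketch only: start one "small" instance via boto (AMI ID and key name are placeholders).
    import boto

    ec2 = boto.connect_ec2()  # picks up AWS credentials from the environment

    reservation = ec2.run_instances(
        'ami-12345678',            # hypothetical public AMI
        instance_type='m1.small',  # the $0.10/hr "webhead" size
        key_name='my-keypair',     # an SSH key pair registered with EC2 beforehand
    )
    instance = reservation.instances[0]
    print("%s %s" % (instance.id, instance.state))  # e.g. "i-abcd1234 pending"
    ```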

    One of the concerns everyone raises with hosting on virtual machines is that if a VM instance goes down, you lose everything on it. It comes with hard drive storage (160 GB on the small size), but if something goes wrong, that data's gone.

    I think the rejoinder here is that, on real hardware, if something goes wrong, your data's gone. You never set up an enterprise-level website on the assumption that any particular hardware has to survive. Single points of failure are always a mistake, and backups are always a necessity. When any machine explodes - real or virtual - the question is how fast your system recovers to "working well enough" (seconds, hopefully) and then how long it takes you to get it "back to normal" behind the scenes (hours, hopefully). Those answers shouldn't depend on whether there's a physical drive to yank out of a dead physical machine that may or may not retain valid data.

    Which brings up what I think is one of the selling points of EC2: free fast bandwidth to S3 [amazonwebservices.com], Amazon's near-infinite-size, redundantly-replicated data storage platform. That's a nice backup option to have available. That's part of why, if I were starting a new web service, I wouldn't host it on real hardware. I'd like not having to worry about backups, tapes, offsite copies... bleah, let someone else worry about it.
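    (To make the S3-as-backup idea concrete, here's a hedged sketch with boto; the bucket name and file path are made up, and create_bucket/new_key behavior may vary slightly by boto version.)

    ```python
    # Sketch only: push a nightly dump from an EC2 instance to S3 (names are placeholders).
    import boto

    s3 = boto.connect_s3()
    bucket = s3.create_bucket('example-site-backups')  # returns the bucket if you already own it

    key = bucket.new_key('db/backup-2008-03-27.sql.gz')
    key.set_contents_from_filename('/var/backups/backup-2008-03-27.sql.gz')
    ```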

    Slashdot hasn't run many stories on EC2 (none that I know of) because until now it's been a niche service. Without a way to guarantee that you can have a static IP, there had been a single point of failure: if your outward-facing VMs all went down, your only recourse was to start up more VMs on new, dynamically-assigned IPs, point your DNS to them, and wait hours for your users' DNS caches to expire. That meant that while it may have been a good service for sites that needed to do massive private computation, it was an unacceptable hosting service.

    Now with static IPs, you basically set up your service to have several VMs which provide the outward-facing service (maybe running a webserver, or a reverse proxy for your internal webservers), and you point your public, static IPs at those. If one or more of them goes down, you start up new copies of those VMs and repoint the IPs to them. No DNS changes required.
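    (The repointing step can be scripted as well. A rough boto sketch, with placeholder instance and AMI IDs; in practice you would wait for the replacement to reach the 'running' state before associating the address.)

    ```python
    # Sketch only: keep a static ("elastic") IP and move it to a replacement instance on failure.
    import boto

    ec2 = boto.connect_ec2()

    # One-time setup: allocate an elastic IP and attach it to the live front end.
    address = ec2.allocate_address()
    ec2.associate_address('i-11111111', address.public_ip)  # placeholder instance ID

    # Later, if that front end dies: boot a replacement and repoint the same IP. No DNS change.
    replacement = ec2.run_instances('ami-12345678', instance_type='m1.small').instances[0]
    ec2.associate_address(replacement.id, address.public_ip)
    ```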

    I know there are other companies offering web hosting through virtual servers. Please share information about them; the more we all know, the better.

    • Slashdot hasn't run many stories on EC2 (none that I know of)

      There have been a few. [google.com]

      Amazon EC2 Open To All [slashdot.org]
      Amazon and Hardware As a Service [slashdot.org]
      Amazon Betas 'Elastic' Grid Computing Service [slashdot.org]

    • >> One of the concerns everyone raises with hosting on virtual machines is that if a VM instance goes down,....

      I don't quite understand this one. I've heard it before and was puzzled. Do these VMs "go down" more frequently than regular hardware would?

      Or is it just the dynamic IP that makes it more problematic?

      Thanks --

      Stephan
      • It's not that they go down particularly often, but if there is a hardware failure, Amazon generally won't make heroic efforts to recover your data - your instance is terminated, hope you had a backup. Sometimes they'll try to reboot it temporarily, but they won't put the drives in another machine or anything like that. So make sure your app can recover from a complete and permanent server failure, with total data loss on that server. Also, previously, there was the IP problem, and also you couldn't ensu
  • check out Mosso (Score:4, Interesting)

    by tnhtnh ( 870708 ) on Thursday March 27, 2008 @12:11PM (#22883492)
    I use Mosso - they are inexpensive and are hosted and owned by Rackspace. Therefore the service is fantastic!
  • by smackenzie ( 912024 ) on Thursday March 27, 2008 @12:14PM (#22883516)
    The more I learn about Amazon's AWS offerings... the more confused I get. I've read a TON of material, reviewed the APIs, looked at sites built on this platform and have read many blog entries. I feel like I "know" a lot, but understand very little. Someone help?

    1. What is a perfect "typical" application for AWS? (And don't answer, "one that needs to scale...". I'm looking for a real-world example.)

    2. Anyone here on Slashdot using these services? Nervous about single point of failure? (And I don't mean just technical, but also financial, legal, security, business continuity, etc.)

    3. EC2 / S3: is there any value in using just one? I've noticed there are additional services now, too.

    4. In the days of SOx / PCI / CISP compliance, is it even possible to set up a financial app on AWS?
    5. Also, finally, maybe a question to Amazon... why? Someone did the financials recently and it was a fascinating study. The short of it is that at max capacity, the net income from all of AWS for Amazon is so tiny, you have to wonder why they even bothered... [need citation]

    A classic case of wanting to like the technology, but not really sure how to use it. Thanks.
    • by PsychoKiller ( 20824 ) on Thursday March 27, 2008 @12:48PM (#22883984) Homepage
      1) Don't limit your ideas about using EC2 to hosting. You can run whatever you want on their instances. Think about a company that does some kind of data acquisition/processing. You could set up a system for them that does a run in 1 hour (since that's the minimum billing slice) instead of their current process that takes a month on a single workstation (or even cluster of workstations in their office). The results get stored on S3 where they download them over an encrypted connection.

      2) Yes, very nervous. Especially with the privacy laws in the States. I'm Canadian, and I would be talking to lawyers about data storage issues before sending customers' data down South.

      3) EC2 is useless without S3, since your images are stored on S3. S3 is useful without EC2, as you can use it for static storage and BitTorrent hosting.

      4) See my response to #2.

      5) I don't work for Amazon. :P
      • I understand that EC2 is *nix only, with nonpersistent filesystems, and that S3 is an apparently very reliable remote filesystem that you can get to really fast from EC2 for free.

        I understand the huge value of this for transient (1 month) intensive, very bursty workloads. Which, mostly, seems to be what it's targeted at.

        But for actual normal servers I don't quite see it... I mean one option is that it's cheap. Which it might or might not be, depending on who you compare it to. Maybe it's the most reliab
        • So it seems like you'd only be really interested in this if you were always going to have your main instance up, and then you were sometimes going to have none but sometimes going to have many other instances up. Past a certain scale it might be worth your time to have more instances 9-5 and less at night, or something (depending on your users)

          Yeah, that is pretty much the use case for hosting on EC2, but it's a small fraction of the market. It seems like EC2 could be much more general purpose, but at this point it isn't.

          I'm also curious whether it supports automatic instance restarting... e.g. if a zone goes down, can you tell it you definitely want it to put your instance up again in a new zone?

          Nope, so you have to have at least two instances running at any one time so they can keep watch over each other.

        • But for actual normal servers I don't quite see it... I mean one option is that it's cheap. Which it might or might not be, depending on who you compare it to. Maybe it's the most reliable option out there at some price point, but the static IPs (for instance) are pretty young to consider this true, and it's not necessarily cheaper than the discounter's dedicated servers. If we just assert for the discussion that it's not cheaper per power, then the question is, is it advantageous in other ways?

          What I've

      • Many S3 tools, like Jungle Disk, support encryption before you upload the files. So what you're storing on Amazon's servers they couldn't give to the government even if they wanted to.
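        (Client-side encryption is also easy to script yourself; this sketch shells out to gpg and then uploads the ciphertext with boto. The recipient, bucket, and paths are placeholders.)

        ```python
        # Sketch only: encrypt locally with gpg so Amazon only ever stores ciphertext.
        import subprocess
        import boto

        subprocess.check_call([
            'gpg', '--batch', '--yes',
            '--recipient', 'backup@example.com',    # placeholder GPG key
            '--output', '/tmp/files.tar.gpg',
            '--encrypt', '/tmp/files.tar',
        ])

        bucket = boto.connect_s3().get_bucket('example-private-bucket')  # placeholder bucket
        key = bucket.new_key('backups/files.tar.gpg')
        key.set_contents_from_filename('/tmp/files.tar.gpg')
        ```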
        • by Thing 1 ( 178996 )

          Many S3 tools, like Jungle Disk, support encryption before you upload the files. So what you're storing on Amazon's servers they couldn't give to the government even if they wanted to.

          In general, it's far better to do no wrong, rather than dream up ways to avoid getting caught.

          In this specific case, your VMs "automatically" read the data back in when you start them up, don't they?

          So, the warrant includes the VMs. You're not safe using EC2 for criminal activity...

          • I'm not familiar with what your options are with EC2, but with S3, my concern isn't, "How do I get away with illegal activity while getting someone else to house my data for me," so much as, "How do I protect my constitutional right to protection from unreasonable search and seizure." In the US of late, simply having a constitutionally guaranteed right doesn't really guarantee anything.

            People have a right to privacy, and corporations have a right to protect trade secrets. Or maybe you're so naive as to b
    • by imroy ( 755 )

      As far as using S3 on its own - it would make a good store for static content. You have a site, say www.example.com, but a separate host for static files, e.g. static.example.com. This has long been common practice - having a simple, lightweight web server for static files (style sheets, icons and other images, etc.). For S3 you set up a CNAME record in your DNS that points to s3.amazonaws.com and create a 'bucket' in S3 with the hostname (static.example.com). Bingo, cheap and scalable off-site storage for
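      (A minimal sketch of that setup with boto; the bucket name has to exactly match the hostname you CNAME to s3.amazonaws.com, and everything below is a placeholder.)

      ```python
      # Sketch only: create a bucket named after the static hostname and publish a file publicly.
      import boto

      s3 = boto.connect_s3()
      bucket = s3.create_bucket('static.example.com')  # must match the CNAME'd hostname

      key = bucket.new_key('css/site.css')
      key.set_contents_from_filename('site.css')
      key.set_acl('public-read')  # so http://static.example.com/css/site.css is world-readable
      ```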

    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Here's a simple example that may make more sense.

      Say you're an indie 3D animated movie creator. You've been doing your modeling, rendered some scenes, done some low-res proofs, some nice single frames.

      But now you want to render the entire movie, in HD. But you're not PIXAR.

      So you set up a VM to be a rendering slave, log in to amazon, "hi, I'd like 1000 machines please". Load everything up, render your movie, and you're done.
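      (The "1000 machines please" step is essentially one API call. A rough boto sketch; the AMI ID is a placeholder and would be an image with the render slave pre-installed.)

      ```python
      # Sketch only: request a batch of identical render slaves, then stop the meter when done.
      import boto

      ec2 = boto.connect_ec2()

      reservation = ec2.run_instances(
          'ami-12345678',                  # placeholder AMI with the render slave baked in
          min_count=100, max_count=1000,   # take whatever Amazon will grant, up to 1000
          instance_type='m1.xlarge',
      )
      slaves = reservation.instances

      # ... frames get rendered, results land on S3 ...

      ec2.terminate_instances([i.id for i in slaves])  # you stop paying the moment they terminate
      ```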

      Amazon is charging by CPU (and bandwidth). The rendering time for a movie is fixed, it
    • by Jack9 ( 11421 )
      Why?

      I remember all the initial articles/hype quoting Amazon reps as saying it was a way to monetize devalued/obsolete hardware rather than writing it all off and disposing of it.
    • I can answer the PCI question. VISA and the approved auditors do not currently allow virtual machines, even if you are running them yourself on your own hardware. And to have them run on someone else's hardware.... forgetaboutit.

      That may be changing soon though, with the popularity of VMs, they will have to do /something/. I hope.
    • I have a hard time placing this in the enterprise for large business, but I can see how it would be valuable for medium and small business. S3 would let them serve large media files to lots of customers and pay per usage instead of buying a fat pipe, while also having a substantial amount of burst capacity should they have an event where lots of people seek out their goods.

      From a personal level, I'm using it for two purposes. For one, I use Jungle Disk with S3, and have some specific utilities and files w
    • by kbob88 ( 951258 )
      Re #5 (why?)

      My guess is that they set up all the infrastructure to run their own systems. Then someone realized, hey, we could market this! So most of the fixed cost is already covered by their own internal needs.
    • 2: We're using EC2 on Momondo.com. Currently we have about 30 hardware-hosted machines as our backbone, and when we need to scale our robot servers we just add x extra servers to handle the load. It works great for our purpose.
    • by dave420 ( 699308 )
      Here's what I'm using it for:

      I'm currently developing a web-based specialised video sharing site, and I'm going to use EC2 to convert uploaded videos to .flv (and possibly h264, spec depending). That way, the main web server can create new instances should the upload queue get too long, and close them down when the queue is empty. The user uploads their videos directly to the S3 service, and they get added to the queue. Each EC2 instance, when booted, asks the web server for a video to process, and all the instances c
  • Slicehost.com (Score:3, Interesting)

    by casings ( 257363 ) on Thursday March 27, 2008 @12:21PM (#22883594)
    Cheap, affordable, reliable VPS solutions: www.slicehost.com

    I have been with them for a few months, and their interface's ease of use, and the level of support they provide are just what I was looking for.
    • Re: (Score:2, Interesting)

      by stevey ( 64018 )

      Although a Debian Developer was recently critical of slicehost [pusling.com], and in a seemingly valid way.

      Personally I host a reasonably high-traffic antispam service [mail-scanning.com] and I think Amazon's offering looks good, but as mentioned a little pricy.

      I love the idea of adding extra nodes on-demand, but I think I'm not yet at the level where it would be a worthwhile use of my time or budget.

    • in the UK, bytemark.co.uk get very good ratings for service. I am not a bytemark staff member, but I am a customer!
    • Re: (Score:2, Interesting)

      by TheRaven64 ( 641858 )
      Since this seems to be thread for whoring our hosting providers, I'd like to recommend mine [macminicolo.net]. I get a dedicated server for about the price of a VPS, and I have a human in my IM roster that I can bitch at if anything goes wrong. I've been with them now since a few months after they were on /. [slashdot.org] and have been a very happy customer. They had a few reliability issues early on, but nothing recent. The hard drive on my machine died just under a year in, and Apple refused to honour the warranty, so the co-lo comp
      • As people are recommending their hosting providers this seems like a good place to ask for a suggestion. I've wanted to move my filesystem off of the machine at my place, and onto an online provider. Unlike most people looking for a shell account I'm not so interested in bandwidth costs; I figure I'll only need 200GB/mo. But for storage I want about 2TB, and this seems to be the killer. Each hosting company that I've checked has built their pricing plans around the assumption that people are hosting web-apps
        • My advice would be for a dedicated server, and talk to the hosting company. Mine will get external disks for you if you want them, so you could chain a load of FireWire disks to give you 2TB, but you're probably better off finding someone that focuses on 1U rack spaces and getting a server with a few big disks in it. Generally, if you talk to someone at the company they can provide you with something that fits your needs. If they don't have a human you can talk to, then run away and don't look back - y
  • by saterdaies ( 842986 ) on Thursday March 27, 2008 @12:24PM (#22883640)
    There's still one glaring problem. There is no persistent storage (other than shuttling data to S3). That means that if your website is database-backed, you need to figure out what to do should your instance crash. Hourly backups? Mounting S3 as a slow FUSE filesystem that you can put your database on? It's all ugly.

    And it's still not a great value. It seems cheap. $72/mo for a 1.7GB RAM server. Well, look at Slicehost and you can get a 2GB RAM Xen instance (same virtualization software as EC2) for $140 WITH persistent storage and 800GB of bandwidth. That doesn't sound like a great deal UNTIL you calculate what EC2 bandwidth costs. 800GB would cost you $144 at $0.18 per GB bringing the total cost to $216 ($76 more than Slicehost). That 18 cents doesn't sound like much, but it adds up. The same situation happens with Joyent. For $250 you get a 2GB RAM server from them (running under Solaris' Zones) with 10TB of bandwidth. That would cost you $1,872 with EC2. Even if you assume that you'll only use 10% of what Joyent is giving you, EC2 still comes in at a cost of $252 - and without persistent storage!
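    (The arithmetic above is easy to recheck or rerun with your own traffic numbers; a quick sketch using the prices as quoted:)

    ```python
    # Sketch only: redo the EC2-vs-flat-rate comparison from the figures quoted above.
    EC2_SMALL_PER_MONTH = 72.0   # roughly 720 hours at $0.10/hr
    EC2_OUTBOUND_PER_GB = 0.18   # before the 10 TB price break

    def ec2_monthly_cost(gb_out):
        return EC2_SMALL_PER_MONTH + gb_out * EC2_OUTBOUND_PER_GB

    print(ec2_monthly_cost(800))    # ~216  vs. Slicehost's $140 with 800 GB included
    print(ec2_monthly_cost(10000))  # ~1872 vs. Joyent's $250 with 10 TB included
    print(ec2_monthly_cost(1000))   # ~252  even at 10% of Joyent's allotment
    ```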

    EC2 really got the ball rolling, but it just isn't much of a leader anymore. Other providers have critical features (persistent storage) that EC2 lacks, along with pricing that isn't any more expensive. I want to like EC2, but their competitors are simply better.
    • by jamie ( 78724 ) * Works for Slashdot <jamie@slashdot.org> on Thursday March 27, 2008 @12:44PM (#22883936) Journal

      You get database backup by replicating to another VM, presumably one in a different "zone" for physical separation. Then that backup VM every n hours stops its replication, dumps to S3, and starts replication back up (exactly like a physical machine would stop, dump to tape or to a remote disk, and restart).
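      (A rough sketch of that dump step, assuming a MySQL slave and boto; the bucket name and paths are placeholders, and a real script would add more error handling.)

      ```python
      # Sketch only: pause replication on the backup slave, dump, resume, then push the dump to S3.
      import subprocess
      import boto

      subprocess.check_call(['mysql', '-e', 'STOP SLAVE;'])
      try:
          subprocess.check_call('mysqldump --all-databases | gzip > /tmp/dump.sql.gz', shell=True)
      finally:
          subprocess.check_call(['mysql', '-e', 'START SLAVE;'])  # replication catches back up

      bucket = boto.connect_s3().get_bucket('example-site-backups')  # placeholder bucket
      key = bucket.new_key('mysql/dump-latest.sql.gz')
      key.set_contents_from_filename('/tmp/dump.sql.gz')
      ```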

      Database high-availability is similar. In the extreme case, you replicate your live master to the master database in another zone that entirely duplicates your live zone's setup (same number of webheads, same databases in same replication configuration, etc)... then if the live zone falls into the ocean you point your IPs to the webheads in the HA zone and resume activity within seconds, having lost only a fraction of a second of data stream.

      Having dealt with Slashdot's webheads and databases losing disk, and in some cases having to be entirely replaced, I don't see how persistent storage is a big selling point. I mean it's nice I guess, but not something that I'd sacrifice any functionality for. Applications have to be designed to run on unreliable hardware.

      • In all honesty, accounting for S/N ratio, how much is a slashdot post worth?

        That's what would matter for the failover window of lost data. But really, I'd be interested in how much a post is worth (it is content, albeit small).
    • by mveloso ( 325617 )
      It's $72/month if you're at 100% CPU all month. It's $.10 per CPU hour, which is ridiculously cheap, because you only pay for what you use.

      I haven't used it because of the lack of a static IP. Now, it's a viable solution for the real world.
      • Re: (Score:3, Informative)

        by saterdaies ( 842986 )
        Billing is based on instance-hours, not CPU-hours. So, for every hour or partial hour your instance is running, you get charged. It doesn't matter whether you're at 1% CPU usage or 100% CPU usage during that time: http://www.amazon.com/ec2 [amazon.com]
    • Re: (Score:3, Interesting)

      by dogas ( 312359 )
      It seems like you answered your own question about persistent storage. S3 is persistent storage.

      If you are running a database-backed application on EC2 without a master/slave setup, and your master goes down, to me that seems like a failure to plan for the worst on your end. And it's not as though having persistent storage means your data is safe on that server. Your data is never safe. Hence, a backup/replication plan is ALWAYS needed. Services like EC2 force you to think abou
      • S3 is persistent storage.
        With weak (i.e. useless) semantics that are totally different from conventional storage.

        Your data is never safe. Hence, a backup/replication plan is ALWAYS needed.
        Sure, but if your plan involves a SAN, EC2 cannot support it. There are so many best practices that EC2 does not support; effectively you have to design your app for EC2.
        • Re: (Score:3, Informative)

          by dogas ( 312359 )
          I think if your setup requires a SAN, you're too big and enterprisy for EC2.

          S3 has been working well for us. While the semantics are different than typical storage, I would argue that they are far from useless. Since files on S3 can be made publicly accessible via a web address, we use S3 to host our assets for our website (css, javascript, images), as well as db backups and other backups.

          We have not had to design our app for EC2. We do make use of S3 for storing user data, so we have S3 libraries in our
    • Re: (Score:3, Insightful)

      by dubl-u ( 51156 ) *
      Well, look at Slicehost and you can get [...] WITH persistent storage

      The Amazon machines offer storage that persists for the life of the virtual instance. That's until you kill the instance or until the hardware fails. (It does persist through reboots and OS crashes.) And unless Slicehost is running some crazy magic beyond the RAID-10 setup they mention, a hardware failure could still wipe out your data, and will certainly cause downtime during which you will have an opportunity to wonder when and whether y
  • My major concern (last time I checked) was failover & virtual IPs. I think they fixed this with the new elastic IP. I will have to check again.

    However, another issue I had was sending traffic between 2 EC2 nodes. They don't mention (maybe I missed it), nor guarantee, the bandwidth between nodes in the same availability zone. This is crucial if you are trying to run very fast performance tests between the 2 nodes and you need minimum delays. I am not sure if the bandwidth between the EC2 nodes is capped
  • We looked at the EC2 solution when we started developing our hosted offering and didn't care for getting a new IP address when, and if, something went down. We went with a hosting company called LayeredTech. They offer public and private VPS and VPDC solutions. The really cool thing that has impressed me is they run 3Tera's AppLogic platform. It lets you visually (through a web UI) create "applications" based on "appliances". There is a standard portfolio of prebuilt applications (SugarCRM, etc.) and templat
    • No, no one has used AppLogic because the minimum price just to try it out is hundreds of dollars per month. EC2 is somewhat flawed but they are getting a lot of business because it is so cheap to try.
      • Re: (Score:2, Informative)

        When you use LayeredTech as your hosting provider, it's included with no separate license price. LT's prices are very reasonable as well.
  • by dogas ( 312359 ) on Thursday March 27, 2008 @12:41PM (#22883886) Homepage
    My company uses EC2 + S3 + SQS + Rightscale (http://rightscale.com) to manage our infrastructure.

    First off, Amazon has an excellent product. It is essentially Hardware As A Service, and the tools they provide abstract it as such.

    The most common argument against using EC2 for hosting is that if your server goes down, you will lose any data created since the last time you saved a snapshot. While this is true, it forces you to bring a backup + recovery plan to the front of the table. Provided you have a backup + recovery plan in place, you no longer have to worry about fixing a server ever again. If something goes wrong with one of our application servers, I would simply fire up a new instance, link it in with DNS, and terminate the old server. With rightscale, this is all pushbutton.

    Consider that scenario with running your own colo server. You could potentially spend hours diagnosing + fixing an issue with a server before you could bring it back up. Ok fine, the way to mitigate that is to have a hot backup running. But now we're talking about a ton of cash to support 2 servers on a month-to-month basis. We have found that amazon's costs to run EC2 instances are very competitive for the specs.

    Note: I'm not a shill for either RightScale or Amazon; I just find that these 2 companies are at the forefront of where hosting is going, and their products are awesome. It's all about virtualization!
  • A few days ago I posted an article [caravana.to] on my blog in which I try to compare different cloud services and give my two cents on the technology itself (disclaimer: I directly tested only two services, EC2 and GoGrid).

    Beyond the comparison, in my post I say that I was wrong to try to use a utility computing platform like EC2 as a web hosting platform; also, there are other very interesting uses of the technology behind the clouds (e.g. creating disposable environments for application testing).

  • This is a service I find interesting and appealing in many ways, and I intend to investigate it further after reading this thread. But upon using Amazon's handy calculator, my costs for comparable services would be roughly *6 times* what I'm currently paying for leasing two physical machines and the bandwidth to go with them. For quick projects to test out something, this would be a good service. But for a day in/day out stack, I don't think this is it, at least not for me.
  • We've been using Joyent [joyent.com] and are very happy. I've used EC2 for a few things and I think Joyent is more economical for many applications.
  • Does anybody have experience with using EC2 as failover? Can it be fully automated?

    I operate a regular database-backed web site, and have spare servers sitting around in case something goes awry. It would be great if I could avoid that redundancy and set things up so that EC2 instances get fired up if my heartbeat server detects the site is down, pipes the database over (or the latest backup if that's unavailable), and then redirects the load balancer to the EC2 instances. I'd like to do all of this witho

    • by dave420 ( 699308 )
      You can easily start up and terminate instances using the utilities they provide, or the API. Your heartbeat server would easily be able to (re)start instances. Just create and configure your "blank" EC2 and use the utilities provided to upload the image of that EC2 to S3, then use the code it returns when creating instances, and that image will be loaded automatically. EC2+S3 is very powerful, in my opinion. But then that's me. I'm using it to process videos in a queue. I can start and stop instances
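      (A rough sketch of the heartbeat-plus-restart idea using boto and the Python standard library; the URL, AMI ID, and elastic IP are placeholders, and a real setup would also restore the database as described above.)

      ```python
      # Sketch only: if the site stops answering, boot a standby image and repoint the elastic IP.
      import urllib2
      import boto

      SITE_URL = 'http://www.example.com/heartbeat'  # placeholder health-check URL
      STANDBY_AMI = 'ami-12345678'                   # placeholder pre-built failover image
      ELASTIC_IP = '203.0.113.10'                    # placeholder elastic IP, already allocated

      def site_is_up():
          try:
              urllib2.urlopen(SITE_URL).read()  # raises on connection failure or HTTP error
              return True
          except Exception:
              return False

      if not site_is_up():
          ec2 = boto.connect_ec2()
          instance = ec2.run_instances(STANDBY_AMI, instance_type='m1.large').instances[0]
          # (wait for the instance to reach 'running' before this call in a real script)
          ec2.associate_address(instance.id, ELASTIC_IP)
      ```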
