Google's Compute Engine Now Offers Machines With Up To 64 CPU Cores, 416GB of RAM (techcrunch.com)
An anonymous reader shares a TechCrunch report: Google is doubling the maximum number of CPU cores developers can use with a single virtual machine on its Compute Engine service from 32 to 64. These high-power machines are now available in beta across all of Google's standard configurations and as custom machine types, which allow you to select exactly how many cores and how much memory you want. If you opt for 64 cores in Google's range of high-memory machine types, you'll also get access to 416GB of RAM. That's twice as much memory as Compute Engine previously offered for a single machine and enough for running most memory-intensive applications, including high-end in-memory databases. Running your apps on this high-memory machine will set you back $3.7888 per hour (though you do get all of Google's usual sustained-use discounts if you run it for longer, too).
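For a sense of scale, here's a back-of-envelope monthly cost for that 64-core high-memory instance. The $3.7888/hour rate is from the article; the 30% full-month sustained-use discount is an illustrative assumption, not a quoted figure.

```python
# Rough monthly cost of the 64-core / 416GB high-memory machine.
# $3.7888/hr is from the article; the 30% discount for running a
# full month is an assumed illustrative value for sustained use.
HOURLY_RATE = 3.7888
HOURS_PER_MONTH = 730  # ~average hours in a month

on_demand = HOURLY_RATE * HOURS_PER_MONTH
with_discount = on_demand * (1 - 0.30)  # assumed 30% sustained-use discount

print(f"Full month at list price:  ${on_demand:,.2f}")
print(f"With assumed 30% discount: ${with_discount:,.2f}")
```

So even with a generous discount you're looking at roughly $2,000/month if you leave it running.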
Alt GPU service I've been using - FloydHub (Score:5, Informative)
I'll have to check out this Google offering too, but since this is a pretty relevant topic to stuff I've been doing recently - another online option is FloydHub [floydhub.com]. They are cheaper than AWS, and I like them because they bill per second of use, and you can choose between GPU and CPU services (the CPU seems kind of slow). They also have really nice support for Python and Jupyter notebooks running on the server, along with the ability to upload large datasets (for machine learning jobs) and an API for programmatic access to the computational services.
It's mainly targeted at deep learning stuff, so if you need a GPU for other things it may not be as useful. But if you are playing with deep learning, this kind of service makes training models way more feasible for those of us who don't normally buy really expensive GPUs. It's also nice to do what you can to support companies trying to go up against Amazon or Google, and the FloydHub people have been very responsive to questions I have asked.
Re:Alt GPU service I've been using - FloydHub (Score:5, Funny)
The wages of sin are death, but after they take out taxes, all you get is a tired feeling.
Paula Poundstone (IIRC)
Re: (Score:2)
If you want a big cloud instance, AWS has a 128 core 1,952 GB instance type (plus a smaller version the size of Google's). The Spot price looks like $3/hour as I type this, cheap to run a few benchmarks.
Re: (Score:2)
Yeah, was going to post something along these lines.
Woohoo, Google now includes instances up to half of what you can get on AWS! And without the flexibility of getting not only the 64 cores and almost 500GB of RAM, but also the EIGHT dedicated 1900GB NVMe SSDs you get with an i3.16xlarge!
Re: (Score:2)
Worst. Haiku. Ever.
Have a limerick (Score:2)
He lived with a very odd goat
The goat didn't fuck
So he gave it a buck
He says the media misquote
Re: (Score:1)
in seventeen syllables
is very diffic
Compared to Amazon..... (Score:4, Informative)
Re: (Score:2)
You have to pack your data up, place it into a truck and drive it to a factory for further processing.
A middleman takes a lot of your cash and returns a processed product to be collected.
You drive your data back to your cottage and sell the result.
Time to build a local super computer coop and remove that profit loss to the big computer factory owner.
Re: (Score:2)
I want to like the cloud but it's too expensive for me to do anything other than dabble. About a week ago I ran 8 cores for a week on the Google cloud and it cost me about $30. I'd like to run a bit more but $50/month is about the upper limit of what the wife is comfortable with. So, for me, the cloud is nice, but too expensive to be life changing, so to speak.
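A quick sanity check of the numbers above (the $30-for-8-cores-for-a-week figure is the commenter's, and the per-core-hour rate is just derived from it):

```python
# Derive an effective per-core-hour rate from the commenter's
# figures: 8 cores for one week at ~$30 total.
cores = 8
hours = 24 * 7                 # one week
core_hours = cores * hours     # 1,344 core-hours
rate = 30 / core_hours         # effective $/core-hour

print(f"{core_hours} core-hours -> ~${rate:.4f} per core-hour")
```

That works out to a little over 2 cents per core-hour, which is in the right ballpark for a small n1 instance with sustained-use discounts applied.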
If your wife is setting your budget, you're not in the target market for big cloud providers. Does your wife care about scalability? Does she see the benefit of being able to add 500 servers to your pool for this afternoon's peak load, and then releasing them after peak to save money? What sort of durability does she expect for your data? Does she want you to spread your servers across AZ's or regions so if one goes down, your service can still live on?
If not, then you can probably host it on your home desk
Re: (Score:2)
Amazon tells you here [amazon.com] what processors each instance type is running on. Looks like they are using Xeon, which really isn't that surprising.
Another Ad masquerading as an article Slashdot? (Score:1)
Thanks for the fake news article that's actually an advertisement. Slashdot has sunk so low.
Re: (Score:2)
Slashdot has always published press release stories, because that's who's going to submit the most... but they aren't smart enough to get paid for them.
Dammit! (Score:5, Funny)
Re:Dammit! (Score:4, Funny)
640 cores oughtta be enough for anyone.
- Cloud B. Gates
In reality (Score:1)
Re: (Score:1)
Well, you can always go to AWS and get that X1 32xlarge instance. At just $18 an hour running Windows, I heard that it makes a great Minecraft or Wordpress server.
Re:Dammit! (Score:5, Funny)
this:
I need 417 GB of ram and 65 cores.
+ this:
Hi! I make Firefox Plug-ins
- well now finally my local FF performance makes so much more sense....
Really 64 cores? (Score:4, Interesting)
Are these actual CPU cores, or just hyper-threads like with Amazon's AWS? If these are still in Google's "n1" class then by their own documentation they are indeed hyper-threaded.
Hyper-threaded virtual cores, while nice for desktop i7s, are nearly useless for large-scale compute jobs using the likes of ACML, MKL or MPI.
If these really are hyper-threads rather than physical cores, then you're only going to get 32 real threads of compute performance, and you should only pay for that much.
Re: (Score:1)
Data transfer cost (Score:2)
One limitation of "the cloud" (also called "other people's servers") for many HPC applications is the data transfer cost. Transferring data in is cheap or free, but getting your data out again is anything but. Even if the cpu-hours were cheap enough, it's usually cost-prohibitive to transfer a few tens of gigabytes of results out of the server and back home for each job.
Re: (Score:2)
Even if the cpu-hours would be cheap enough, it's usually cost-prohibitive to transfer a few tens of gigabytes of results out of the server and back home for each job.
Data Transfer OUT From Amazon EC2 To Internet
First 1 GB / month $0.00 per GB
Up to 10 TB / month $0.09 per GB
Next 40 TB / month $0.085 per GB
And so on, getting cheaper per GB from there. So if you're talking 50 GB per day, that would be $135/month. Peanuts for anything bigger than a mom and pop shop.
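The tiered math above can be sketched as a small calculator (the tier boundaries and rates are taken from the pricing quoted in this thread, using 1 TB = 1,024 GB; actual AWS billing may round differently):

```python
# Tiered egress pricing from the table quoted above:
# first 1 GB/month free, $0.09/GB up to 10 TB, $0.085/GB for the
# next 40 TB. Tiers beyond that are omitted for brevity.
TIERS = [
    (1, 0.0),         # first 1 GB/month free
    (10_240, 0.09),   # up to 10 TB/month
    (40_960, 0.085),  # next 40 TB/month
]

def egress_cost(gb: float) -> float:
    """Charge each slice of the transfer at its tier's rate."""
    cost, remaining = 0.0, gb
    for tier_gb, rate in TIERS:
        slice_gb = min(remaining, tier_gb)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost

# 50 GB/day for a 30-day month:
print(f"${egress_cost(50 * 30):,.2f}/month")
```

Running this for 50 GB/day (1,500 GB/month) gives about $134.91, which matches the ~$135/month figure above.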
Re: (Score:2)
It's even cheaper if you are able to use AWS DirectConnect. $0.30/hour for 1GbE plus whatever it costs to get a circuit through a peering provider if you don't already have presence in a facility peered with AWS.
My company was already in a peered facility, so it was just a matter of stringing a fiber line between their edge router and ours, and setting up BGP.
$225/mo to move as much data as we please back and forth between our data center and our VPC.
Re: (Score:2)
If you're doing it in the cloud there's no reason to pull the data out of the cloud.