The GCP terms of service do say that any product being shut down will get at least a year's notice before it goes away[1]. For every GCP product that has been shut down, the lead time has been longer than a year, as far as I'm aware.
The fact that Google is calling out GCP separately on its earnings calls is actually somewhat of a commitment to growing that business. Very few products within Google are called out separately, so that was a fairly big change.
> (d) Discontinuation of Services. Google will notify Customer at least 12 months before discontinuing any Service (or associated material functionality) unless Google replaces such discontinued Service or functionality with a materially similar Service or functionality.
[1] https://cloud.google.com/terms/
A year is _crazy short_ for enterprise business, which is where most of cloud growth is right now. It's worse if that component is core to the business in any way.
Think about components you rely on for your service. How long would it take you to stand up or migrate to an alternative? Consider that you've also got all your other business to do too. You can't spend a year just spinning your wheels, from a business perspective, migrating stuff between services because your upstream provider decided to stop building something. You've still got to keep growing. If you're lucky, you've got the staffing to meet your business growth needs, but odds are that's barely true.
On top of having to migrate, what are your replacement's performance characteristics? Are you replacing it with your own solution instead, and if so, how do you operate it, what happens under load, how do you ensure durability? Lots and lots of questions, and you've got a scant 12 months to figure out what you're doing and how to get off the old service _on top of your existing business needs_.
This criticism has been brought up so many times, and yet routinely we still see GCP saying "but we promise to give you a whole year's notice!" as if that's a good thing.
Does any cloud provider provide a blanket "all released features will be supported for X years" to all users without a contract (as part of the terms)? I have never looked into other cloud providers, so I honestly don't know.
Also, I'm guessing larger companies could push for longer deprecation policies on certain features as part of their contracts. I have no insight here, just going by what other people in this thread are saying.
Well that’s hardly the problem, honestly. AWS is surprisingly good at not dropping services, or at least at giving them long-term deprecation plans when they do. They don’t have to enter into a contract to guarantee the longevity of their services. It’s definitely what makes AWS so trustworthy. Amazon in general is pretty good at supporting both business and consumer services in the long run, even when some of those services are clearly money-losing units.
That’s because the company is basically designed like a synergistic portfolio where even money-losing businesses have a purpose in the big picture.
And that’s where Google fails. It’s very well known how ruthless Google can be at killing businesses that don’t produce tangible results. The Alphabet re-org clearly showcases that they are playing a game of optimizing the allocation of their assets. That mentality bleeds down.
Unfortunately, you can’t run a cloud business like that. To Google’s credit, it seems that they are not running their cloud unit with that mentality (I don’t think I’ve heard of GCP services being shut down). But it’s clear there’s a generalized feeling in the industry that Google could at any point become impatient with GCP, leaving many stranded.
For Google to be a distant third in the Cloud game must be pretty hard. This is something that plays to their strengths and yet they are hardly closing the gap.
The big difference is that the companies that are winning (Amazon and Microsoft) are willing to lose in some areas to win in others. They excel at that. Google on the other hand seems to have a hard time with that shit...
> AWS is surprisingly good at not dropping services, or at least at giving them long-term deprecation plans when they do.
Case in point: AWS SimpleDB. SimpleDB hasn't been available in the AWS console for years (since 2015?) but is still accessible through the SDKs and API calls. They don't promote it or even update it (AFAIK), but it's still there and you can still use it. Amazing!
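For what it's worth, the API really is still reachable through the regular SDKs even though the console dropped it. A minimal sketch with boto3 (the domain and item names are made up, and SimpleDB is only offered in a handful of regions such as us-east-1):

```python
# Hedged sketch: talking to SimpleDB through boto3, years after it left the
# console. Domain and item names are hypothetical.
import boto3

sdb = boto3.client("sdb", region_name="us-east-1")

sdb.create_domain(DomainName="legacy-metadata")
sdb.put_attributes(
    DomainName="legacy-metadata",
    ItemName="item-001",
    Attributes=[{"Name": "owner", "Value": "team-a", "Replace": True}],
)
print(sdb.get_attributes(DomainName="legacy-metadata", ItemName="item-001"))
```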
Yeah, while it's not (AFAIK) a written commitment anywhere, what I've heard is that Amazon hasn't spun down any AWS service that still has any use; they've just reduced its visibility, stopped new work on it, and provided a migration path for those who choose to get off.
As long as they don't visibly break it, that builds a reputation that is hard to beat with anything short of a binding commitment of much more than one year's notice.
This is very true. People forget that enterprises, especially large ones, will take it on the chin in a lot of other areas in exchange for long-term stability.
I think this is one of the reasons MS Teams has been so successful, even when there are better apps out there that provide all the same functionality and more.
I have no idea if any of the others have such promises, but wouldn't you agree that Google kind of stands out from the Big Three cloud providers when looking at the historical rate of service deprecation? It's not an accident that Google is rather (in)famous in that regard, and MS & Amazon are not.
No. Amazon and Microsoft are equally happy to kill their failed consumer products as Google. And none of them have a high rate of killing their business-focused cloud products.
But for whatever reason, when Bezos brags about how making failed products is a core part of the company culture because it shows you’re taking risks, HN shrugs. When MS kills off billion-dollar consumer bets after only a couple of years, HN just goes “lol, why would anyone have used Mixer”.
It is a clear double standard that seems to have happened since memes are self-reinforcing.
An example of this is how long it took Amazon to remove Oracle; it was only done recently. If a non-tech-focused enterprise moves to GCP and then gets a year to move everything to Azure/AWS, that is a killer.
> Think about components you rely on for your service. How long would it take you to stand up or migrate to an alternative?
Architect at a F500 here.
It would take us easily 2 years to roll out a solution, get migrated to it, and have it implemented, tested, and signed off. Probably another year (or more) to work out all of the kinks or handle the stuff that wasn't in the initial build (e.g. phase 2 expansions, etc.).
There is a reason we do 5-year TCOs, and it's not just because of the depreciation schedules. It's also the reason we lean toward things with 5- or 10-year maintenance cycles. One year's notice is not enough, full stop.
Not talking about rolling our own solution here either, this is COTS being deployed in a 99.999% environment in multiple countries.
A year is crazy short if your job is to architect an ideal solution, but insanely long if your job is to simply migrate your employer onto a roughly equivalent solution. If a cloud customer isn't continuously testing a migration they're naive.
General best practice is to have your current best setup on cloud provider #1 using all of its first-party hosted services, and a fallback on provider #2 that self-hosts the services from provider #1 and uses provider #2's built-in services where they're better. It's what you should already be doing for benchmarking and testing, but it doubles as a fallback if needed.
And once you've got the fallback planned, you're free to move the active service around almost at a whim. Even for a Fortune 50 with petabytes of data: if your data isn't already hosted everywhere, you're just begging to be wiped out by a simple account problem.
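A minimal version of the "data hosted everywhere" idea is simply dual-writing backups to two providers. Here's a sketch assuming boto3 and the google-cloud-storage client; the bucket names and file path are hypothetical:

```python
# Hedged sketch: push every backup to two independent providers so that an
# account-level problem at one of them can't take the data with it.
# Bucket names and the local path are hypothetical.
import boto3
from google.cloud import storage

def replicate_backup(local_path: str, key: str) -> None:
    # Copy on AWS S3
    boto3.client("s3").upload_file(local_path, "example-backups-aws", key)
    # Independent copy on Google Cloud Storage
    storage.Client().bucket("example-backups-gcp").blob(key).upload_from_filename(local_path)

replicate_backup("/var/backups/db-2021-06-01.dump", "db/2021-06-01.dump")
```

It's nowhere near a full multicloud architecture, but it covers the "wiped out by a simple account problem" scenario.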
Imagine working for a cloud company and then also saying "If a cloud customer isn't continuously testing a migration they're naive," not knowing that not every user of your software has time to continuously fiddle with shit that supposedly works correctly at the time.
While they haven't actually explicitly said that, and certainly my employer hasn't heard that message, our AWS rep has repeatedly and loudly said that all of our planning should incorporate failure: starting from the smallest (like an individual instance), up through a service failing in an AZ or a complete AZ outage, to a service failing across a region or a complete regional outage, to a global service outage, and on up to a global provider outage.
> shit that supposedly works correctly at the time.
I do love me a well-formed customer query.
Yeah, it's supposed to work. I get paid more if it does. But in exceptional circumstances, it won't. And in more exceptional circumstances, your alerts will fail too.
As an admin what do you do with the 90% of your time that isn't actively fixing something if not planning for how to fix the next thing? We did this years before cloud meant anything other than prepare for floods.
> As an admin what do you do with the 90% of your time that isn't actively fixing something if not planning for how to fix the next thing?
Not every cloud customer has a dedicated admin team; some don't even have a dedicated developer team. They simply wanted a website done, contracted someone, and that someone uploaded some HTML to a cloud instance.
Sure, if you're a large company, it makes sense to have redundancy. If you're running a small company with some esoteric webshop to serve AFK customers, it makes less sense.
If it didn't take a team to design what you use then it probably won't take a team to design a fallback. But the difficulty doesn't excuse not doing it, if you don't have a fallback for whatever you pick, pick something simpler.
Ideally we keep stuff running, but I don't want to be the kind of person who just tells you that we've got it. You need to know your airbag might fail so that you take your seatbelt seriously.
> If a cloud customer isn't continuously testing a migration they're naive
This is a dumb point of view that only cloud providers (and their employees) can advocate.
If I'm a company, being "on the cloud" doesn't automatically make me money; my business makes me money. If every two years I have to waste a year (or even six months) re-architecting, then "the cloud" is costing me people money on top of the infrastructure money.
So you architect for multicloud once, and then in the event of a cloud provider failure (a fairly low-probability event, and one whose impact is only high if you haven't mitigated it with a multicloud architecture) you just shift resources to the surviving providers, and maybe throw something on the backlog to incorporate a new provider into your multicloud setup (a sketch of the DNS flip is below). Even if that takes some rearchitecting, the median interval isn't going to be every two years or, most likely, even every ten.
Most enterprises won't do this, either, but it's not because of recurring rearchitecting costs.
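To make the "shift resources to the surviving providers" step concrete, here's a hedged sketch of the DNS flip mentioned above, using boto3 and Route 53. The hosted zone ID, record name, and standby endpoint are all hypothetical, and a production setup would more likely use Route 53's health-check-driven failover routing than a manual change:

```python
# Hedged sketch: repoint the public hostname at the standby deployment on the
# surviving provider. All identifiers below are hypothetical.
import boto3

route53 = boto3.client("route53")

def fail_over_to_standby() -> None:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Comment": "Primary cloud provider unavailable; point at standby",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "standby-lb.other-cloud.example.net"}],
                },
            }],
        },
    )
```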
Of course. But you can't just say you can't afford it. Fire doesn't care that you couldn't afford a smoke detector.
If you can't afford a recovery plan then what you can't actually afford is the service that needs the recovery plan that you can't afford to develop and test.
All costs, including the cost of switching away if it fails, have to be considered part of the sticker price.
That's absurd; if a customer has to engineer for and be prepared to migrate to another cloud provider at any moment, that erases any possible cost advantage for using a cloud provider in the first place. They might as well just self-manage bare metal.
As for running your own servers, that too can fail meaning you still need a migration strategy (even just to new hardware) and you need to be testing it constantly.
And no, the benefit of a cloud provider isn't that normal stuff is easy or cheap but that otherwise impossible stuff can be attempted.
> As for running your own servers, that too can fail meaning you still need a migration strategy (even just to new hardware) and you need to be testing it constantly.
That's what DR is for. We have a main bare metal site and a secondary site. Throw a couple of spare servers / switches / PDUs / Hard Drives / whatever in that space too.
Cloud options need to be better and more effective than that.
> And no, the benefit of a cloud provider isn't that normal stuff is easy or cheap but that otherwise impossible stuff can be attempted.
A virtual server is a virtual server. A container is a container. The only thing the cloud offers me is the ability to change my CapEx spends into OpEx spends. Otherwise I have to hope that the vendor won't do me dirty, and will leave me in a stable, workable place 3+ years from now.
The bare metal colo operations will. Long track record of stability everywhere I've been. Barring an act of god or other unusual circumstances, I know my tier 4 colo will be there next year, and the year after. Will GCP be around?
What's DR other than migrating onto what's hoped to be (but never is) an identical setup? It's like having your Amazon fallback be ... Amazon in another region.
That protects you against localized outages but not design failures or systematic outages or incompatibilities in new versions of your stack.
And it takes time to keep your DR plan up to date, patch the VMs, etc, and test it. Almost like this migration plan I'm talking about.
> The only thing the cloud offers me is the ability to change my CapEx spends into OpEx spends.
Ehh, not really. You can set up load-balancer pools larger than your entire colo, or use a globe-spanning backbone to create datasets that auto-replicate worldwide. Both are usually much easier than setting those services up yourself, let alone building out the multiple zones. If the cloud is just a big colo to you, then you probably shouldn't use the cloud. It's frightfully expensive.
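As a small, concrete example of that kind of managed primitive: creating a bucket in a GCS multi-region location gets you cross-region replication without operating anything yourself. A sketch assuming the google-cloud-storage client and a hypothetical bucket name:

```python
# Hedged sketch: a bucket in the "US" multi-region location is replicated
# across regions by the provider; there is nothing for you to run.
# The bucket name is hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.create_bucket("example-worldwide-dataset", location="US")
print(bucket.location)  # "US" is a multi-region location
```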
> If a cloud customer isn't continuously testing a migration they're naive.
Most cloud customers are naive, as the stream of major enterprise breaches caused by S3 buckets without security settings that have been default for many years demonstrates.
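For reference, the guardrail that closes off the classic open-bucket breach is a single API call. A sketch with boto3 and a hypothetical bucket name:

```python
# Hedged sketch: turn on S3 Block Public Access for one bucket so ACLs and
# policies can't accidentally expose it. The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-legacy-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```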
So, pray tell, how would you assess and assign fault here? What's the actual bad action and who caused it?
You see, there's strength in the truth. It doesn't matter if drunk drivers shouldn't hit you, that's why you check both ways before crossing the street, and you'd be naive not to. (In this analogy, drunk drivers are outages...)
I don't think a year is long enough though - especially for something that's unique and foundational. A complex product built using BigTable, for example, may need more than a year to migrate to something else.
People simply know that AWS isn't going anywhere. It's the Microsoft Windows of the cloud world. Google has a long history of dropping stuff relatively quickly.
In the early days, they had a ton of special configurations, like w.amazon.com - the internal wiki, which was probably running in a datacenter somewhere.
But in recent years, they run entirely on AWS. They even have solutions architects to help them build things in a resilient way. They also spent years ripping out Oracle and replacing it with RDS MySQL, PostgreSQL and Aurora.
You guys also have “vending machines” for things like Ethernet cables in the office with price tags displayed. They’re free, but it’s to remind you not to waste Bezos’ money on frivolous things. And you pay market-rate for microwave reheated salmon in the cafeteria.
...then they go and buy more of some of the most expensive office space in the world per square foot.
Source: I interviewed there 3 times, and every time left me with a dystopian impression of the company. To add insult to injury, their recruiter ballparked me at only slightly above half of my then-TC.
I stopped working there in 2019 and in the few years I worked there I never saw prices on the IT kit in the vending machines. The IT vending machines were one of the better ideas I've seen at companies I've worked at. Many companies would require you to open a ticket and have an IT person hand deliver the keyboard, mouse, or other accessory to your desk. At Amazon, you can just go to the vending machine, swipe your badge, and get whatever you need, no human work required.
How is that a negative? Seems like a more efficient way to operate a company.
At Microsoft, in my buildings at least, they kept the giant PC-recycle boxes around for a few months and we were encouraged to fish-out anything useful (I know other teams forbade this, it's complicated). When there was something I needed I could usually get it from my team's admin-assistant just by asking them over Lync. I never had to file a request with IT to get any parts or equipment I needed.
> Seems like a more efficient way to operate a company.
I do actually agree with you - it's just a matter of framing the company's practices.