

If that’s what we mean when we talk about “tipping points”, yes, they exist. But as you yourself said, “We don’t necessarily understand exactly how close we are.” The idea that passing some arbitrary line like “1.5 degrees” is a point of no return is unscientific nonsense, and that’s what the vast majority of people mean when they say “tipping points.”
And the point is, none of that changes the need to keep working towards improvement. Every fraction of a degree of heating we avoid will make a difference. Even as monumental climate changes occur, those changes can be lessened, their impact reduced, by any amount that we decarbonise the atmosphere.
If you’re under the impression that I’m arguing against climate change being real in any way, shape, or form, or that I’m arguing against it being utterly catastrophic, you’ve missed my point so badly that you might as well be reading it in a different language. My point is very, very simple: there is never a point where we get to give up.
No matter what happens, every effort to reduce the damage to our climate will save lives. Things can always be worse, and because things can always be worse it logically follows that things can always be better, even when the definition of “better” is “fewer people die.”
The fight isn’t lost or won. Get those concepts out of your mind. Suzuki - as brilliant as he may be - is an idiot for invoking them like this. He’s speaking about a very limited, very specific piece of the fight, but he should have understood that the public would take his words entirely out of context. The people who want to poison and destroy our planet for profit are, right now, actively pushing the propaganda that the battle against climate change is over. They are wrong, and they are lying. The battle against climate change is a battle to reduce harm, and you can always reduce harm, no matter how great the scale of the eventual harm may be.
The key difference is that AI is a much, much more expensive product to deliver than anything else on the web. Even compared to streaming video, its delivery costs are orders of magnitude higher.
What this means is that providing AI on the model you’re describing is impossible. You simply cannot pack in enough advertising to make ChatGPT profitable. You can’t make enough from user data to cover the operating costs.
AI fundamentally does not work as a “free” product. Users need to be willing to pony up serious amounts of money for it. OpenAI have straight up said that even their most expensive subscriber tier operates at a loss.
Maybe that would work if you could sell it as a boutique product, something for only a very exclusive club of wealthy buyers. Except that model is also an immediate dead end, because the cost of training a model is the same whether it serves 10 people or 10 billion, and those training costs are astronomical. To get any kind of return on investment, these companies need to sell a very, very expensive product to a market that is far too narrow to support it.
There’s no way to square this circle. Their bet was that AI would be so vital, so essential to every facet of our lives that everyone would be paying for it. They thought they had the new cellphone here: a $40/month subscription plan from almost every adult in the developed world. What they have instead is a product with zero path to profitability.