I don't know as much as LeCun (obvio), but if a house cat can do house-cat stuff on the energy a house cat consumes, then it just means the way we're currently doing intelligence is energy-intensive.
But just like the recent finding that an LLM trained with ternary weights (-1, 0, +1) rather than floating point can deliver strong performance, it seems to me that a useful level of intelligence will get ever more energy-efficient as we figure out exactly what lets the thing be intelligent.
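To make the ternary idea concrete, here is a minimal sketch of what "-1, 0, +1" weights look like, loosely in the spirit of the BitNet-style absmean scheme: scale each weight matrix by its mean absolute value, round, and clip to three levels. This is an illustration of the representation only, not the actual training recipe (the real work quantizes during training, not after the fact); the function name and shapes are my own.

```python
import numpy as np

def ternarize(w, eps=1e-8):
    """Quantize a weight matrix to {-1, 0, +1} plus one per-tensor scale.

    This is a post-hoc illustration of the absmean idea, not the
    quantization-aware training used in practice.
    """
    gamma = float(np.mean(np.abs(w))) + eps        # absmean scale factor
    wq = np.clip(np.round(w / gamma), -1, 1)       # snap to three levels
    return wq.astype(np.int8), gamma               # ~1.58 bits/weight + one float

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
wq, gamma = ternarize(w)
print(wq)            # every entry is -1, 0, or +1
print(wq * gamma)    # coarse reconstruction of the original weights
```

The energy story follows from the representation: with weights restricted to three values, the matrix multiplies at the heart of inference reduce to additions, subtractions, and skips, with no floating-point multiplies needed.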
A five-year-old is an AGI without the world's data. That suggests general intelligence is a matter of structure, not data, and probably not raw compute either.
So I think the estimate of AGI in the next five years or so is entirely plausible. Agentic research is making great strides, and it is all about several inferences working together to get a result. It is about structure.
I guess we'll see, but LeCun has his biases, and those biases are becoming ever more evident. A bit like Chomsky.
We'll all find out soon enough. Or DARPA will release what they already have in about twenty years, when it isn't relevant (ha, joke).