Yeah, I think in many ways yes, in some ways no.
Analog components (your I/O, etc.) stopped scaling with new nodes way back at 28nm, maybe even before then. Cache was scaling well until around 7nm and has been leveling off since. Logic (your CPU/GPU cores, etc.), on the other hand, continues to scale with each new process shrink, and actually looks like it might start scaling even better going forward.
Seems to suggest scaling favors logic these days, while cache and analog transistors are already so much smaller than logic transistors that they're probably near their physical limits. In that sense, yeah, Moore's Law is dead.
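To put rough numbers on that divergence (the per-node scaling factors here are illustrative assumptions picked to match the trend, not vendor data), a quick sketch:

```python
# Per-node area scaling factors for each transistor type over five shrinks.
# All factors are made-up illustrative numbers, not foundry figures.
SCALING = {
    "logic":  [0.55, 0.60, 0.60, 0.58, 0.58],  # keeps shrinking well
    "sram":   [0.60, 0.65, 0.75, 0.85, 0.95],  # levels off after ~7nm
    "analog": [0.95, 0.97, 0.98, 0.99, 1.00],  # effectively stopped scaling
}

for kind, factors in SCALING.items():
    area = 1.0  # normalized area on the starting node
    for f in factors:
        area *= f  # each shrink multiplies the area by that node's factor
    print(f"{kind:>6}: ends at {area:.2f}x of its starting area")
```

Compounding is what makes the gap so stark: logic ends up around 0.07x while analog barely moves, so the non-logic parts eat a growing share of the die.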
And then yeah, you mention the efficiency part, which is interesting, because on one hand companies are really trying to market themselves as efficient. Meanwhile the actual base power draw is so much higher these days than it used to be. Top-end GPUs used to max out at 250W. Now you have a 450W (default) 4090 that is arguably one of the best performance-per-watt GPUs Nvidia has released to the consumer market, yet it's still drawing far more power than cards used to.
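Quick toy math on how both things can be true at once (the perf numbers are hypothetical, only the TDPs are from above):

```python
# Hypothetical relative performance scores; 250W and 450W are the real TDPs.
old_gpu = {"perf": 100, "power_w": 250}  # older 250W flagship
new_gpu = {"perf": 250, "power_w": 450}  # 450W (default) flagship

old_eff = old_gpu["perf"] / old_gpu["power_w"]  # perf per watt, old card
new_eff = new_gpu["perf"] / new_gpu["power_w"]  # perf per watt, new card

print(f"efficiency gain: {new_eff / old_eff:.2f}x")  # much better perf/W...
print(f"power increase:  {new_gpu['power_w'] / old_gpu['power_w']:.2f}x")  # ...at 1.8x the watts
```

So "most efficient" and "hungriest card we've ever shipped" aren't contradictory claims; the efficiency gain just got spent on pushing absolute performance (and power) higher.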
Anyway, I'm still interested in hardware to an extent, but it's likewise hard to get excited about stuff when large market segments have stagnated and only gotten more expensive.
Based on this chart from AMD, it's also one of the reasons chiplets made sense: put the logic on the most advanced node they want to use, and offload the I/O components to older nodes, because there's no reason to pay for the more advanced/expensive ones when I/O doesn't shrink anyway.
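Here's a back-of-the-envelope sketch of those economics using a simple Poisson defect-yield model (all wafer costs, die areas, and defect densities below are made-up assumptions, and packaging cost is ignored):

```python
import math

def cost_per_good_die(die_area_mm2, wafer_cost, defects_per_mm2,
                      wafer_area_mm2=math.pi * 150**2):  # 300mm wafer
    """Cost of one working die: wafer cost spread over yielding dies."""
    dies_per_wafer = wafer_area_mm2 / die_area_mm2        # ignores edge loss
    die_yield = math.exp(-defects_per_mm2 * die_area_mm2) # Poisson yield model
    return wafer_cost / (dies_per_wafer * die_yield)

# Assumed node economics: advanced node is pricier and slightly dirtier.
ADVANCED = {"wafer_cost": 17_000, "defects": 0.001}   # $/wafer, defects/mm^2
OLD      = {"wafer_cost": 4_000,  "defects": 0.0005}

# Monolithic: 200mm^2 of logic + 150mm^2 of I/O, all on the advanced node.
mono = cost_per_good_die(350, ADVANCED["wafer_cost"], ADVANCED["defects"])

# Chiplet: logic stays on the advanced node, I/O moves to the cheap old node
# (same 150mm^2 either way, since I/O doesn't shrink).
logic = cost_per_good_die(200, ADVANCED["wafer_cost"], ADVANCED["defects"])
io    = cost_per_good_die(150, OLD["wafer_cost"], OLD["defects"])

print(f"monolithic: ${mono:.0f}   chiplet: ${logic + io:.0f}")
```

Big dies get punished twice, fewer candidates per wafer and worse yield, so carving the non-scaling I/O off onto a cheap mature node wins on both counts, even before you count the flexibility of reusing the same I/O die across products.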