There is so much going on behind the scenes too. So, so much. So much that very few people would likely even care to share, let alone be able to.
It's interesting to me where a lot of the theory for building AI comes from and how it gets implemented. Based on what's publicly available, and on the dozens and dozens of companies (and others) working to create rationally thinking computational intelligence (i.e., able to reason), I see the solution already laid bare before us. Hear me out:
Breaking it all down in a very simple way, it would essentially be a networked collage of mini systems, each built on one (possibly more than one) model that can be learned and is fact-based, such as simple mathematics. Combine all of these 'base structures of factual intelligence', divide them into intelligent, self-contained systems, and link those with other systems that only reason and infer over the available data. Somewhere in the network sit further integrated systems that handle all of the things you need to learn in order to rationalize. Combine all of these systems and add layers of complexity with subsets of the same kinds of systems (i.e., the large system broken down further into similarly large but 'bundled' smaller systems), and you're not far off from getting very, very close to AGI, or even, if it's given the option to learn and change its own code/base structure, a 'super' AGI. In the first case you can perhaps still implement a kill switch; in the second it's essentially uncontrollable, the kill switch unattainable. (A rough sketch of what I mean by this modular picture is further down.)

And when you remember that AI has been a research field for much longer than the current rendition of the 'internet' and the subsequent technological advances, along with worldwide computer adoption across all industries, we've already paved the groundwork for a lot of these systems to be trained on basic, generalized, fact-based data. It seems to me the stage for reasoning 'training' is already past as well: humans have been producing that kind of information, in various forms, readily available for analysis. And for me, this is the 'behind the scenes' issue if I have to read between the lines of the OpenAI fiasco in November.
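To make the modular picture above a bit more concrete, here's a toy sketch. This is purely my own illustration, not anyone's actual architecture, and every class name in it is made up: small fact-based modules answer only what they know, a bundle links them and 'reasons' just by deciding which sub-module can handle a query, and bundles can nest inside bundles.

```python
# Toy sketch (all names hypothetical) of the "networked collage of mini systems" idea:
# small fact-based modules, linked by a layer that routes queries, with bundles
# nesting inside bundles the way the larger system breaks down into smaller ones.

from abc import ABC, abstractmethod
from typing import Optional


class Module(ABC):
    """A self-contained system that may or may not be able to handle a query."""

    @abstractmethod
    def answer(self, query: str) -> Optional[str]:
        """Return an answer if this module can handle the query, else None."""


class ArithmeticModule(Module):
    """A 'base structure of factual intelligence': simple mathematics only."""

    def answer(self, query: str) -> Optional[str]:
        try:
            # Extremely restricted: digits and basic operators only.
            if query and all(c in "0123456789+-*/(). " for c in query):
                return str(eval(query))
        except Exception:
            pass
        return None


class LookupFactModule(Module):
    """Stands in for any other learned, fact-based subsystem (here, a lookup table)."""

    def __init__(self, facts: dict[str, str]):
        self.facts = facts

    def answer(self, query: str) -> Optional[str]:
        return self.facts.get(query.lower())


class Bundle(Module):
    """A larger system broken into 'bundled' smaller systems: it only decides
    which of its sub-modules can answer and passes the query along."""

    def __init__(self, modules: list[Module]):
        self.modules = modules

    def answer(self, query: str) -> Optional[str]:
        for module in self.modules:
            result = module.answer(query)
            if result is not None:
                return result
        return None


if __name__ == "__main__":
    system = Bundle([
        ArithmeticModule(),
        Bundle([LookupFactModule({"capital of france": "Paris"})]),  # nested bundle
    ])
    print(system.answer("2 + 2"))               # -> 4
    print(system.answer("capital of France"))   # -> Paris
```

Obviously the real thing would swap the lookup tables and routing rule for learned models, but the shape of it, fact systems plus a layer that only reasons over what they return, is the point.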
Ilya Sutskever is noted for clearly understanding the power requirements and technologies needed to connect 'said' systems ('said' systems being entirely my own inference as noted above; I'm simply connecting it with the journalism covering Sutskever as the driving force behind the push to find the power and technology able to make the next iterations of AI possible, because the ideology already exists in some form). If you take Sam Altman's route you mostly skirt the kill-switch debate (not entirely), and you end up with a sort of "well, we can always unplug it" approach and let the thing go. Sutskever doesn't think that's a good idea. But I think both agree on what they're building and how to get where they need to go, obviously; otherwise you don't end up where we are now with OpenAI (at least on the surface).
This is all just my two cents on things... interesting times for interesting technologies. 2024 is going to get really interesting (as has been noted in the MSM).