One of my research interests is data structures for real-time computer graphics, such as scene graphs. So I've been experimenting with SceneKit since it appeared in Apple's Mac and iOS SDKs a while ago, billed as a seemingly mature technology promising to solve quite a few problems in computer graphics.
However, as of now I can only say: it's a missed opportunity. For the sake of performance, SceneKit optimizes the hell out of the graphics pipeline and does an awful lot of fancy stuff in the background - COLLADA files look beautiful, animations seem to just work, and effects are just a few flags away. But all this comes at the cost of making the user suffer from unexpected behavior. Now, we all know that scene graphs are optimization techniques for 3D graphics, and they work wonders with the right balance of control and "magic". There are a number of flavors of scene graphs out there - but repeat after me: SceneKit is not one of them - it is not a scene graph in the sense of computer graphics. What describes SceneKit best is a hierarchical data structure that shuffles models and animations to the graphics card as efficiently as possible, and that's that.
A huge problem with SceneKit is that it has no user-accessible state. Everything is asynchronous, and there is no access to the traversal stages or to the transformation stack. This also means one cannot get frames rendered in sync with other things that might happen. My plan was of course to build an Augmented Reality application with it, but since I take frame coherency seriously, I stopped right after the first few renderings. I have come to accept the huge latencies in the camera-capturing part, but additional latency in the rendering makes things even worse. I am hoping that SceneKit will gain additions and mechanisms to trade performance for control, but I'm not holding my breath.
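For illustration, here is roughly all the per-frame control SceneKit hands you - the `SCNSceneRendererDelegate` callbacks (a sketch; `FrameObserver` is a hypothetical name of my own, but the delegate protocol and its methods are the actual API). Note that none of these callbacks exposes the traversal or the transformation stack; they merely bracket an opaque render pass:

```swift
import SceneKit

// Hypothetical observer class; SCNSceneRendererDelegate is SceneKit's real
// per-frame hook protocol. The callbacks run on SceneKit's rendering thread.
final class FrameObserver: NSObject, SCNSceneRendererDelegate {

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        // Last chance to mutate the node hierarchy before this frame is
        // rendered - e.g. pushing the latest camera pose into an AR overlay.
        // There is no way to inspect or intercept the traversal itself.
    }

    func renderer(_ renderer: SCNSceneRenderer,
                  didRenderScene scene: SCNScene, atTime time: TimeInterval) {
        // The frame has been handed off to the GPU, but SceneKit gives no
        // guarantee about when it reaches the screen relative to a captured
        // camera frame - the frame-coherency problem described above.
    }
}
```

Wiring it up is just `sceneView.delegate = observer` on an `SCNView`; the point is that this narrow, asynchronous interface is the whole story - there is no synchronous "render now and tell me when the frame is on screen" path.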