The basics of the Jerusem Circles are mostly done. I'm having a problem importing 3DS objects from Blender: it seems they've changed the way mesh (X, Y, Z edge) information is saved on exported objects, or my system just doesn't handle all the variations properly. Since I'm having a really difficult time figuring out why, and time is sliding into 2011, I'll work on importing X3D objects instead, which Blender also exports. I wish I'd done it in COLLADA; maybe later. Without a working importer I can't get complex objects like the Circles on the screen, and my system will probably never be used to create such complex objects anyway. I'm about creating, viewing and manipulating the views rather than creating the objects themselves.
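One reason X3D is more approachable than binary 3DS is that it's plain XML. A minimal sketch of pulling vertices and faces out of a simple X3D file might look like this; the function name `load_x3d_mesh` is my own for illustration, and a real importer would also have to handle transforms, normals, and less tidy files:

```python
# Sketch: extract (vertices, faces) from the first IndexedFaceSet in an
# X3D document.  Assumes a simple, namespace-free file; illustrative only.
import xml.etree.ElementTree as ET

def parse_numbers(text):
    # X3D attributes may separate values with spaces or commas
    return text.replace(",", " ").split()

def load_x3d_mesh(x3d_text):
    """Return (vertices, faces) from the first IndexedFaceSet found."""
    root = ET.fromstring(x3d_text)
    ifs = root.find(".//IndexedFaceSet")
    coord = ifs.find("Coordinate")
    # "point" is a flat run of floats: x y z x y z ...
    nums = [float(v) for v in parse_numbers(coord.get("point"))]
    vertices = [tuple(nums[i:i + 3]) for i in range(0, len(nums), 3)]
    # "coordIndex" lists vertex indices per face; -1 ends each face
    faces, face = [], []
    for idx in (int(v) for v in parse_numbers(ifs.get("coordIndex"))):
        if idx == -1:
            faces.append(tuple(face))
            face = []
        else:
            face.append(idx)
    if face:
        faces.append(tuple(face))
    return vertices, faces
```

Feeding it a one-triangle scene returns three vertex tuples and one face tuple, which is the kind of array my mesh code expects.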
I will eventually use the import system that Qt3D is using now, but my mesh arrays are still based on the soon-to-be-deprecated OpenGL system. I also still have a picking problem to resolve in Qt3D before I can use it. I think I've bugged them enough for a while, though; their priorities for now lie in getting it up and running rather than in large scene management and efficiency, at least that's my guess. I'm still waiting on a problem I found in their system before Christmas.
A note about the system itself. The system uses a primitive brain made of schematics that connect thoughts together, and these thoughts can output to any 3D scene, that is, to any 3D object. Whether the inputs are sensors from the real world or internal sensors from buttons or events, the system can animate objects directly, either in an on/off world or in the analog world (0 to 100% ranges) and more. Events control these 3D objects so as to present a directly animated view of position and events using size, position, rotation, color, transparency and the like. For instance, I can easily animate the size and color variation of blood vessels in a 3D body so that you can see the blood pressure in real time. Our brains think in 3D, not in 2D, so no conversion is necessary, as is the case with writing like this tome. You get used to seeing blood-vessel sizes and color variations. I could just as easily show a beating heart that accurately follows the real heartbeat on screen. There is no limit to the world of 3D; that's why I believe it is the answer to presenting detailed, complex information out of the book.
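The blood-vessel idea boils down to a direct mapping from an analog value onto an object's appearance. A small sketch of that mapping, with ranges and names that are purely illustrative (not from the actual system):

```python
# Sketch: map an analog sensor reading (0..100%) straight onto a 3D
# object's scale and color, the way the system animates blood vessels.

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def vessel_appearance(pressure_pct, min_radius=0.8, max_radius=1.2):
    """Return (scale, rgb) for a vessel: it swells and brightens
    from dark red to bright red as pressure rises."""
    t = max(0.0, min(1.0, pressure_pct / 100.0))  # clamp the analog range
    radius = lerp(min_radius, max_radius, t)      # size follows pressure
    color = (lerp(0.4, 1.0, t), 0.0, 0.0)         # red channel brightens
    return radius, color
```

At 0% the vessel sits at its minimum radius in dark red; at 100% it reaches the maximum radius in full red; out-of-range readings are clamped so the picture never lies about the sensor's span.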
The financial side of the system is very important, and I am making strides there also; without funds, I can't hire 3D object creators and view creators. At this stage I can't imagine being funded by anyone within the Ubook community. With this in mind, on the automation front, I should be receiving the instrumentation for a Wago-based system, which uses a protocol called Modbus over Ethernet (Modbus TCP). I can use it from any operating system, and I prefer Kubuntu Linux over Windows. I'll probably get into Apple, but they're too proprietary and I'm trying to get away from anything that locks me in, at least for direct automation control. Also, Linux is the king of embedded devices, like phones and small controllers, 3D TVs and such. Viewing is a different story, though; that, I hope, will run from any browser on any operating system using Native Client. A protocol, by the way, is a way for computers to talk to the real world of sensory data; it's a language that lets the real world synchronize to the high speed of computers. Up to now I've mainly supported OPC (OLE for Process Control), but since it's built on Windows COM/DCOM it runs only on Windows for now, so it's not really an open protocol.
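To give a feel for how simple Modbus TCP is on the wire, here is a sketch of the request frame for reading holding registers, which is how you'd poll values out of a device like a Wago coupler. The register addresses and counts are examples only; a real client would send these bytes over a TCP socket to port 502 and parse the reply:

```python
# Sketch: build a Modbus TCP request (MBAP header + PDU) asking a device
# for a run of holding registers.  Illustrative framing only.
import struct

READ_HOLDING_REGISTERS = 0x03

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build the ADU for 'read holding registers' (function code 0x03)."""
    # PDU: function code, starting address, quantity (all big-endian)
    pdu = struct.pack(">BHH", READ_HOLDING_REGISTERS, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0 for Modbus),
    # length = number of bytes that follow (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

That's the whole request: twelve bytes to ask unit 1 for two registers, which is part of why the protocol works the same from Linux, Windows or anything else with a TCP stack.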
I had one viewer today; not much traffic on this blog, but at least I'm not writing this just for myself. A large part of my life has been given to this system, and it is a labor of love. I really want to do something worthy of my time here before I leave this place, that is, if I stay diligent and persistent under all the circumstances that befall a programmer/developer like me.