A way of placing parts of the Urantia Book into a program form that displays and manipulates concepts from the Urantia Book in 3D. The creation interface allows multiple creators to work simultaneously on the Internet.
Monday, December 27, 2010
Merry Christmas and Happy New Year.
I'm working on the Jerusem Circles. I'm using Blender to create them and when they're done, I'll put them in the Virtual Urantia System as both a separate scene and on the Jerusem sphere itself. Finally, a bit more content from the book. Programming takes so much time that content becomes a low priority. Can't wait till Native Client is released in the first quarter. It's got me really excited. The idea that when I make a change or add content to the system, it'll be immediately seen around the world sends shivers of excitement through my bones. I hope it is everything that Google says it will be.
A little more on the circles. The circles are interesting. The centers have different structures. One is a monolithic, transparent crystal that is 50 miles in diameter and serves as the administrative center of the system. Another has 619 temples acting as a working model for Edentia. It's forty miles in diameter and is an actual reproduction of Edentia true to the original in every detail. Wow! Splendor comes to mind.
I believe the circles and triangles and gardens of Jerusem will probably be second in detail only to Urantia itself. Urantia has this broad history running through its evolution, and then its detailed history right up to modern times. I haven't decided how to approach changing the times on screen yet. Maybe a double clock floating in midair to the right of the planet: one showing large time scales to select from, and the second a finer division of time. When the hands are changed, you see changes taking place on the planet.
I've figured out how I will program the flow of the races across the planet. Imagine the racial colors blending and floating across the continents while the clock moves its hands. I'll use a morph shader for that one. Everything is ready in the desktop world, but alas, not in the world of the Internet, which I realize is the only world that is going to matter. One must be persistent and tackle the grunt work if beauty is to be expressed.
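Just so the morph shader idea isn't hand-waving, here's a minimal sketch of the kind of fragment shader I have in mind, written against Qt's QGLShaderProgram. The texture and uniform names (raceMapA, raceMapB, blend) are placeholders I made up for this sketch, not names from the actual system, and a real version would be fed by the clock described above.

    // Sketch only: blend two hypothetical "race distribution" textures by a
    // time factor so the colors appear to flow across the continents.
    #include <QGLShaderProgram>

    static const char *morphVertSrc =
        "attribute highp vec4 vertex;\n"
        "attribute mediump vec2 texCoordAttr;\n"
        "uniform highp mat4 matrix;\n"
        "varying mediump vec2 texCoord;\n"
        "void main(void)\n"
        "{\n"
        "    texCoord = texCoordAttr;\n"
        "    gl_Position = matrix * vertex;\n"
        "}\n";

    static const char *morphFragSrc =
        "uniform sampler2D raceMapA;\n"   // distribution at the earlier date
        "uniform sampler2D raceMapB;\n"   // distribution at the later date
        "uniform mediump float blend;\n"  // 0.0 .. 1.0, driven by the clock
        "varying mediump vec2 texCoord;\n"
        "void main(void)\n"
        "{\n"
        "    gl_FragColor = mix(texture2D(raceMapA, texCoord),\n"
        "                       texture2D(raceMapB, texCoord), blend);\n"
        "}\n";

    bool setupMorphShader(QGLShaderProgram &prog)
    {
        return prog.addShaderFromSourceCode(QGLShader::Vertex, morphVertSrc)
            && prog.addShaderFromSourceCode(QGLShader::Fragment, morphFragSrc)
            && prog.link();
    }

    void setMorphTime(QGLShaderProgram &prog, float t)  // t in [0, 1]
    {
        prog.bind();
        prog.setUniformValue("blend", t);
    }

The clock hands would simply drive the blend value between 0 and 1 for whatever pair of time slices is loaded; the rest is exactly the kind of grunt work I mean.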
With that thought, picking has become quite a problem. Qt3D may have a real problem with picking. Picking, once again, is the act of selecting a hand on the time clock of the Earth I was mentioning and then moving the hands forward or backward. There's a problem when a scene has a large number of objects that share the same geometry, spheres for example: it picks only the original geometry. That won't work. Hopefully, I can be ready by the start of the New Year to explain to the 3D team what is taking place within their system that is causing this picking problem. Or maybe I'm doing something wrong. The brain, though, is ready to handle all the tasks of passing the clock's hand changes to the system. That works and has worked for a long time now. The brain that drives the Virtual Urantia system works well; it runs Power Plants, and all the features needed to create the trip to Paradise are there in the desktop world but not in the Internet world. Once I'm done making this OpenGL ES2 and Internet jump, content will once again take first priority. Nothing but fun here.
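In case the shared-geometry problem is unclear, here's a toy illustration. These are made-up structures for explanation, not Qt3D's classes: several objects in my scenes point at one and the same mesh, so anything that resolves a pick back to the geometry can only ever name the first object; each instance needs its own pick id.

    // Toy illustration only; these are not Qt3D classes.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Geometry {                       // shared mesh data (e.g. one sphere)
        std::string meshName;
    };

    struct SceneObject {                    // one drawable instance in the scene
        std::shared_ptr<Geometry> geometry; // many objects may share this
        int pickId;                         // but each needs its own pick id
        std::string label;
    };

    int main()
    {
        auto sphere = std::make_shared<Geometry>(Geometry{"sphere"});

        // Two clock hands built from the same sphere geometry.
        std::vector<SceneObject> scene = {
            {sphere, 1, "hour hand"},
            {sphere, 2, "minute hand"},
        };

        // Picking by geometry can't tell them apart; picking by id can.
        for (const SceneObject &obj : scene)
            std::cout << obj.label << " shares mesh '" << obj.geometry->meshName
                      << "' but has unique pick id " << obj.pickId << "\n";
        return 0;
    }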
Monday, December 20, 2010
Jerusem circles and Native Client
Good news: Native Client will release at the beginning of the year, in Q1. It'll be part of Chrome 10's release. Hopefully Qt will support it soon after. Also working on getting the Jerusem circles done before I forget what I'm doing all this for. So this is the story: get Qt3D working so that I can place 3D in the web browser, get the effects system transferred to it also, then get it working on Native Client, which runs the program directly from a browser without installation, and then create the effects for the trip to Paradise. I'm skipping a lot of steps, but it's the general direction that matters here.
I'm getting closer every day.
Friday, December 17, 2010
This picking thing is taking a while
Since I can't seem to get Qt3D's picking system working, I'll add the code into a sample cube scene program. If it doesn't work there, I'll add it to Qt3D's bug list and send them the program. They're really a responsive group.
Also, on the embedded side, the instrumentation computer from Wago (their automation systems are used worldwide) won't support 3D-accelerated graphics. It makes me wonder how many TVs will support running 3D programs from the Internet. Since some of them use 3D glasses, I should have a lot of success with the new TVs.
Here's the correspondence I sent to Wago, leaving out names.
Using the Mesa 3D package, I can still use the IPC. It wouldn't have any acceleration or shader effects, so the 3D frame rate would slow to a crawl, but they could still see their stuff in 3D. I'm hoping that your engineers will use the TI OMAP 4 CPU. A local Panda Board, which is a dual-core ARM-based solution with a 3D graphics accelerator, connected to your local Ethernet-based coupler would give me 3D-accelerated graphics. At $175.00 plus the cost of a 5-volt power supply and cables for the touch screen, it will work.
Thanks for the effort
Pierre Chicoine
Tuesday, December 14, 2010
A picking we will go, a picking we will go, hi ho the merryo
Figured out the picking scene thingamabob. Yes, it's true, an extra scene handler is needed to handle picking, and now that I understand what they want, it's all mechanical. I know, I know, you're saying to yourself right now: Pierre, tell me more, I'm just so fascinated with this picking stuff, I'm chomping at the bit to know. Well, I won't leave you hanging. QGLAbstractScene is needed to do picking. Who would have thought.
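For anyone following along, here is roughly the shape of that extra handler as I currently read the Qt3D snapshot I'm using. Treat it as a sketch, not verified code: the calls (QGLAbstractScene::loadScene, QGLView::registerObject, QGLPainter::setObjectPickId, the ObjectPicking option) are my understanding of the API, and the file name is just an example.

    // Sketch only: my reading of how the picking pieces fit together.
    #include <QGLView>
    #include <QGLAbstractScene>
    #include <QGLSceneNode>
    #include <QGLPainter>

    class PickableView : public QGLView
    {
    public:
        PickableView()
            : m_scene(0), m_root(0)
        {
            setOption(QGLView::ObjectPicking, true);            // turn picking on
            m_scene = QGLAbstractScene::loadScene("cube.obj");  // any loadable model
            m_root = m_scene ? m_scene->mainNode() : 0;
            if (m_root)
                registerObject(1, m_root);                      // pick id 1 -> node
        }

    protected:
        void paintGL(QGLPainter *painter)
        {
            if (!m_root)
                return;
            painter->setObjectPickId(1);   // tag the geometry for the pick pass
            m_root->draw(painter);
        }

    private:
        QGLAbstractScene *m_scene;
        QGLSceneNode *m_root;
    };

If that matches what the Qt3D team intended, the mouse events should then land on the registered node and the rest is bookkeeping.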
Is this the way to Paradise?
Thursday, December 9, 2010
Back To The Future with Qt3D
Had a lot of help from the Qt3D team and got some speed back on large scene viewing. A large scene is one with a lot of objects (meshes) in it. The Ubook Paradise trip has a lot of objects, so it has to work with enough speed to spare that the 2D menu and combo box stuff respond without crawling. My test platform has 937 objects, and before the fix it took 10 seconds just to turn the scene.
It's a breath of fresh air and I'm back on the road to getting it done in 2011. Picking (object selection) is up again.
On another note: why use my system to create the trip to Paradise? After all, there are a lot of systems that can do a far better job of creating the trip to Paradise, Blender for one. The answer is this: I created the system with the capability to manipulate the environment such that when you click on Jerusem, you get information from within the book that shows Jerusem stuff. That kind of manipulability, relating directly to the Ubook, can't be duplicated with any of those fantastic packages. They are built to do animation for movies and the like, not to be an interactive educational system with direct database connections to the book. Also, from what I've seen, they cannot animate from events such as mouse clicks. Further, even if they were capable, they can't output that capability to the Internet. And when on the Internet, they can't share creative experiences like interactive creation of different scenes over the Internet.
There are, literally, hundreds of reasons why I chose to use my system, the primary one being that I could. I dream of a day when thousands of people create Ubook stuff in collaboration. Before that can happen, I need Qt on Native Client. 2011, I hope.
I feel the drive inside pulling me towards that dream, a kind of unconscious attraction. Where it comes from, I don't know. If it doesn't happen, it won't be because I didn't try; it'll be because I've lost my ability to program. Even if all I could use were my nose, that's what I would continue to enter programming text with.
Saturday, December 4, 2010
Up And Running on the N900
Did my compile for the N900, and it worked. The N900 is Nokia's Linux phone based on the OMAP ARM embedded processor. I have a ways to go and a lot of programming to get the basic primitives back up and running on the OMAP ARM processor. The program runs, but no 3D. I had it some two months ago, so I have some backtracking to do to get cubes back up on the system. First cubes and rectangles, then lines, then cylinders, then spheres, then complex objects and a few other dynamically created shapes, and I'm home free. Maybe by 2011, I'll have 3D up and running on embedded platforms. Then, when Native Client releases and Qt is made to run on it, I'll be running everywhere and I can concentrate on getting content and pretty scenes up for the book.
When all that's done, I'll be licensing the system. Can't wait to get there.
Thursday, December 2, 2010
Picking Objects in an OpenGL scene
Ok this one's a bit technical.
Picking is the ability to select an object with a mouse or on the screen. Up to now, the picking system worked flawlessly, but picking now has to be upgraded from OpenGL to OpenGL ES2 if I'm to run on the Internet using Native Client. In OpenGL, picking was handled automatically with a simple function that used to show me the index number of the object, but it's not there anymore. So here is how I'm going to do it. Instead of figuring out an imaginary ray in 3D space that traverses all the objects under the cursor, a complicated scheme, I'll do it by coloring the objects. It's a lot easier to program. I recreate every object in the scene more than 30 times a second; that's a lot of stuff happening really fast. Since I tell the system what color each object is going to be anyway, on the frame that is sent when the user clicks, I will color each object with a unique color. Then, after the scene is drawn, I'll get a picture of the scene from the graphics card, look at the X and Y position of the dot under the mouse click, and, using a pre-created list matching color to object, I'll know what object has been selected by using the color as the index into the list. Voila, object picking. Well, that's the theoretical side anyway; now I have to figure out the code that makes it work.
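To make that concrete, here's a small sketch of the color-to-index bookkeeping against plain OpenGL ES2. The function names are mine, not the system's, and the flat-color shader that paints each object with its assigned color is left out; the pick frame would also need lighting, blending and anti-aliasing switched off so the colors read back exactly as written.

    // Sketch only: encode an object's index as an RGB color for the pick
    // frame, then read back the pixel under the cursor and decode it.
    #include <GLES2/gl2.h>

    // Pack an object index (0 .. 16,777,214) into an RGB color.
    static void indexToColor(unsigned int index, GLubyte rgb[3])
    {
        rgb[0] = (index >> 16) & 0xFF;
        rgb[1] = (index >> 8) & 0xFF;
        rgb[2] = index & 0xFF;
    }

    // After the pick frame is drawn (each object flat-colored by indexToColor),
    // read the pixel under the cursor and turn it back into an index. The y
    // coordinate is flipped because window coordinates start at the top-left
    // while glReadPixels counts from the bottom-left.
    static int pickObjectAt(int x, int y, int viewportHeight)
    {
        GLubyte pixel[4] = { 0, 0, 0, 0 };
        glReadPixels(x, viewportHeight - y - 1, 1, 1,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
    }

The returned number is then just an index into the pre-created list that maps colors back to objects.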
You know, it's almost like I know what I'm doing.