The thing is, OpenGL 4.4/4.5 is actually quite good. But there's soooo much technical baggage behind it; there was a discussion on Twitter, but Twitter's 140-char limit is not enough for this.
So well, here are my expectations for GLNext:
- Unified GLSL compiler. This has actually been promised, as a common shading language intermediate representation. No more one vendor's compiler suddenly throwing an error while another one runs the same shader fine, and a third renders whatever it wants.
- No legacy calls like glTexImage/glCompressedTexImage vs glTexStorage. Just leave one (the superior one, glTexStorage; see the first sketch after this list).
- No hazard tracking. Treat everything as if it were persistent mapped/unsynchronized mapped. I know how to use ARB_sync; I can put the fences where I need them (see the second sketch after this list). Update: Clarification: by “treat everything as if it were persistent mapped” I mean the model (that there is no hazard tracking in GL4 for those types of mapping), not that everything must be persistent mapped.
- Standardized Multi-monitor support.
- Standardized Multi-GPU support is a plus.
- Fix the “create a GL context to get the true GL context” fiasco (i.e. having to create a dummy context just to retrieve wglCreateContextAttribsARB…).
- Change the name. I don’t want to google “OpenGL 5 how to do <xxx>” and find an article from 1997 among the top 10 (based on a true story); or find snippets of code that would obviously break in 5.0 but worked fine before 4.0 (especially if Khronos actually delivers on my no-hazard-tracking request!). This creates a lot of misinformation. OpenGL 4.4/4.5 is quite good, but you don’t know how many people just don’t know how to use it correctly because the tutorials are all mixed up.
- Multithreading support in the standard. Let me create an “object context” per thread, child of the main GL context, which I can use in each thread. I can create and map buffers, validate commands, and compile shaders from within these threads; then, in the main thread, append all the commands that have been queued in the other threads to the main one. The only operations that require locking the main thread are buffer creation and mapping/unmapping (and textures). Yes, this sounds very much like the failed deferred contexts, or D3D12’s bundles. Shared contexts are, AFAIK, not very thoroughly specified in the standard, making it a wild west where what works and what doesn’t depends on each implementation’s whims; and furthermore, since the GL standard isn’t multithreading aware, there is no way for us to help with the synchronization, forcing the GL driver to lock much more frequently than it should.
- Bindless Textures only. It’s great, and I love the idea of explicitly controlling texture residency. The old method of texture units can die. Backwards compatibility can be achieved in software by keeping the permutations of textures needed by a material (i.e. tex A goes in tex unit 0, tex B goes in tex unit 1) in immutable, unique tables; in D3D11 these tables call the appropriate set-texture-unit functions, while in GLNext the table just binds a UBO that is already filled on the GPU with the necessary handles (see the third sketch after this list). Update 02: I’ve been informed that as of today only 2 IHVs support bindless (the 3rd major one on desktop doesn’t), and I forgot GLNext targets mobile as well. So obviously bindless-only will not happen, for technical reasons.
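To make the glTexStorage point concrete, here is a minimal sketch of the immutable-storage pattern (numMips, width, height and pixels are placeholders):

```cpp
// Immutable storage: allocate every mip level up front. Format and
// dimensions can never change afterwards, which removes a whole class
// of driver-side validation.
GLuint tex;
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glTexStorage2D( GL_TEXTURE_2D, numMips, GL_RGBA8, width, height );
// Uploads go through glTexSubImage2D; the texture cannot be accidentally
// respecified, unlike with the legacy glTexImage2D path.
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, width, height,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels );
```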
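And for the no-hazard-tracking point, this is the model I mean: persistent/unsynchronized mapping where I place the fences myself. A rough sketch (bufferSize, the timeout and the fence placement are illustrative only):

```cpp
// Persistent + coherent mapping: the driver does no hazard tracking;
// we promise not to overwrite data the GPU is still reading.
GLuint vbo;
glGenBuffers( 1, &vbo );
glBindBuffer( GL_ARRAY_BUFFER, vbo );
const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                         GL_MAP_COHERENT_BIT;
glBufferStorage( GL_ARRAY_BUFFER, bufferSize, 0, flags );
void *mapped = glMapBufferRange( GL_ARRAY_BUFFER, 0, bufferSize, flags );

// ... write to 'mapped', issue draws referencing that region ...

// We place the fence ourselves, exactly where we need it:
GLsync fence = glFenceSync( GL_SYNC_GPU_COMMANDS_COMPLETE, 0 );
// Before reusing the region, wait until the GPU is done with it
// (timeout is in nanoseconds; 1 second here):
glClientWaitSync( fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000 );
glDeleteSync( fence );
```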
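Finally, the bindless point in code. A sketch using GL_ARB_bindless_texture; textureHandles and numTextures stand in for whatever the material system tracks:

```cpp
// Get a 64-bit bindless handle and explicitly control residency.
GLuint64 handle = glGetTextureHandleARB( tex );
glMakeTextureHandleResidentARB( handle );

// Fill a UBO with the handles a material needs; binding the material
// then boils down to binding this one buffer instead of N texture units.
GLuint materialUbo;
glGenBuffers( 1, &materialUbo );
glBindBuffer( GL_UNIFORM_BUFFER, materialUbo );
glBufferData( GL_UNIFORM_BUFFER, sizeof( GLuint64 ) * numTextures,
              textureHandles, GL_STATIC_DRAW );
glBindBufferBase( GL_UNIFORM_BUFFER, 0, materialUbo );

// When the material is no longer in use:
glMakeTextureHandleNonResidentARB( handle );
```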
So, well. That’s my wish list. I had to get it off my chest.
Hi Matias…
I really appreciate all your in depth insight into OpenGL and from the perspective of someone who wants OpenGL to not suck…
I have a question for you… as a beginner to OpenGL… but actually I’m an expert outside of OpenGL (wrote my own compiler and far harder things than that)…
So… where is a good place to start… if I want to make a game engine? I want my game engine to be all about space… outer space, the stars, planets, etc. So I need to make my own, as there isn’t anything good enough out there for what I need.
Specifically… “What has a good future”. I read about how “Stateless/bindless OpenGL is the future”… but can I use this right now? Or can I at least design my current API with the future in mind?? If so what should I do to kind of wrap “Both current and future OpenGL behaviours” together? (I suppose I should just simply use the future API right now and try to emulate it in terms of the current API, right?)
Is there an engine that takes care of all this low-level horribleness without being too big? Something that’s really designed as “a smart wrapper to make OpenGL easy to use and fast”… rather than an actual game engine?
I’m finding about 1000 different OpenGL game engines, wrappers, libraries, etc… all offering to solve all sorts of different issues… I don’t even know where to start… But one thing I DO know is I don’t want to be trapped by a bad library or have it waste my time… like has happened in the past.
I don’t want to spend a week learning a huge “conceptual behemoth” (Unity3D) just to draw a few voxels on screen! Just to find out that I could have used a better library…
Hi, sorry for taking long to reply. I’ve been sick in bed for the last few days. Still recovering.
Well, of course I will recommend that you try Ogre 2.1 (https://bitbucket.org/sinbad/ogre/ & http://www.ogre3d.org/forums/viewforum.php?f=25), which is the graphics rendering engine I’m working on.
Ogre 2.1 is very efficient and very flexible, using modern practices. It will generate shaders automatically for you, and you can also write your own. Though because it’s still a work in progress and very new, many old Ogre libraries have not been ported yet.
For some this is a showstopper (especially long-time users, or people looking for an existing component for most of the problems they encounter); others don’t mind, since they weren’t using those libraries anyway.
However it’s not specialized in voxel rendering (at least not à la Minecraft), since GPUs deal with triangles (or compute shaders); but certainly a custom Hlms implementation could render them (you would have to write it yourself).
BGFX (https://github.com/bkaradzic/bgfx) is also a small rendering library.
But since you’re interested in such low level:
OGL has a nasty initialization scheme, because it has been patched over for 20-25 years. On Windows, for example, you need to create a dummy context, grab some useful function pointers, then create the real context.
Therefore there are utility libraries (like glew and gl3w) that will help you handle that part.
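To give you an idea, the dance looks roughly like this on Windows (a sketch with error checking omitted; it assumes a pixel format has already been set on hdc, and the WGL_* constants and the function pointer typedef normally come from wglext.h):

```cpp
// 1. Create a dummy legacy context; it's the only way to call
//    wglGetProcAddress for the "real" context creation function.
HGLRC dummy = wglCreateContext( hdc );
wglMakeCurrent( hdc, dummy );

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(
    HDC, HGLRC, const int* );
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress(
        "wglCreateContextAttribsARB" );

// 2. Now create the context we actually wanted.
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
    WGL_CONTEXT_MINOR_VERSION_ARB, 5,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};
HGLRC real = wglCreateContextAttribsARB( hdc, 0, attribs );

// 3. Throw the dummy away.
wglMakeCurrent( hdc, real );
wglDeleteContext( dummy );
```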
“Specifically… “What has a good future”. I read about how “Stateless/bindless OpenGL is the future”… but can I use this right now? Or can I at least design my current API with the future in mind?? If so what should I do to kind of wrap “Both current and future OpenGL behaviours” together? (I suppose I should just simply use the future API right now and try to emulate it in terms of the current API, right?)”
Well, yes. That’s what DX12 did. They created the notion of descriptor tables, which are simply arrays of memory in which each entry specifies which texture to use; and they separated HW into tiers, where Tier 1 can only reference up to 256 textures per descriptor table (the HW limit on Intel Haswell GPUs, and GeForce 400 & 500 IIRC), so that’s the lowest common denominator; while Tier 2 can reference a virtually unlimited amount.
However in GL you’re limited to 16 textures per stage, unless you use the GL_ARB_bindless_texture extension (i.e. bindless textures). You can google GL_ARB_bindless_texture to learn how to use it.
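Just to give you a taste (not a complete example; tex and location are placeholders), the core of the bindless path is tiny:

```cpp
// With GL_ARB_bindless_texture no texture unit is involved at all:
GLuint64 handle = glGetTextureHandleARB( tex );
glMakeTextureHandleResidentARB( handle );   // must be resident before use
glUniformHandleui64ARB( location, handle ); // feed the 64-bit handle directly
```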
I’m afraid I can’t go on, because there is enough material here to cover one or more blog posts. You can start your research with:
apitest https://github.com/nvMcJohn/apitest
Riccio’s samples https://github.com/g-truc/ogl-samples
I’m not much into voxel rendering, so I cannot point you to an efficient voxel rendering library, if such a thing exists. You would do better asking Markus Persson or Jon Olick for some directions.