I finally took Nachito on his first backpacking trip. We went to Desolation Wilderness, and following a friend’s suggestion we hiked the Pacific Crest Trail from Echo Lake to Lake Aloha. There’s a boat taxi that takes you from one end of Echo Lakes to the other, which shortens the hike quite a bit and allowed us to reach a fairly remote location with a real wilderness feel without having to walk the whole distance.
HLSLParser
We are using Max McGuire’s HLSLParser in The Witness and I just published all my changes in Thekla’s GitHub repository.
I also wrote some notes on The Witness blog about our motivation, the changes we made, and how we are using it.
Irradiance Caching – Continued
Looks like I’m getting into the habit of starting article series and abandoning them after the first installment. Where’s part 2 of the irradiance caching article that I wrote several years ago? Before starting a new series, I think it’s about time to wrap that one up.
In the final part I wanted to write a bit about our record placement strategy. The main idea was to use the irradiance gradients to control the record density. That was nothing new; in Making Radiance and Irradiance Caching Practical, Krivanek et al. also propose adjusting the record distribution based on the rate of change of the irradiance.
However, the interesting observation is that the scale of the gradient itself is not that relevant; what we care about are changes in the rate of change, that is, second-order differences. If a sample has a small gradient and the one next to it has a large one, that’s an indication that the irradiance between the two changes abruptly and that additional samples may need to be taken between them.
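To make the idea concrete, here is a minimal sketch of that refinement criterion, reduced to a 1D strip of records. The Record structure, its fields, and the threshold are hypothetical and only meant to illustrate comparing the gradients of neighboring records; this is not the actual code used in The Witness.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical irradiance record on a 1D strip of the lightmap: a position
// and the magnitude of the irradiance gradient estimated at that position.
struct Record {
    float position;
    float gradientMagnitude;
};

// Refinement criterion based on second-order differences: if two neighboring
// records report very different gradients, the irradiance between them is
// changing abruptly, so we request an additional sample halfway between them.
std::vector<float> findRefinementPositions(const std::vector<Record> & records, float threshold)
{
    std::vector<float> positions;
    for (size_t i = 0; i + 1 < records.size(); i++) {
        float gradientChange = fabsf(records[i + 1].gradientMagnitude - records[i].gradientMagnitude);
        if (gradientChange > threshold) {
            positions.push_back(0.5f * (records[i].position + records[i + 1].position));
        }
    }
    return positions;
}
```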
Irradiance Caching – Part 1
I finally finished writing the next article about the lightmap rendering tech that I did for The Witness. This one is about irradiance caching, and in particular I describe how to estimate irradiance gradients when the irradiance is sampled using a hemicube distribution. I’m afraid the article is a bit too specialized and I suspect it will only be useful to those who are trying to solve this particular problem, but I think it’s good to have it documented anyway.
This time around we are trying something new: we are cross-posting it on #AltDevBlogADay. It seems to me that most of the subscribers to The Witness blog are non-programmers, so this way it may reach a wider audience. Hopefully this will also encourage me to post more regularly!
We are hiring!
If you are following The Witness blog you may have heard that we are looking for programmers; if not, go check it out.
I’ve been working on The Witness for a bit more than a year. So, I thought it might be a good time to write a bit about my experience so far, in the hope of attracting some talent to join us.
Software Patents are Programmer’s Responsibility
The other day I read this on Dark Shikari’s blog (one of the developers of x264):
Most importantly, stop harassing the guy whose name is on the patent (Lars): he’s just a programmer, not the management or lawyers responsible for filing the patent. This is stupid and unnecessary. I’ve removed the original post because of this; it can be found here for those who want to read it.
I don’t know much about this particular case; I don’t know whether Lars came up with the patented algorithm on his own, and I’m not really qualified to discuss that.
But that’s not what I want to write about. What really struck me from this post is the idea that programmers are not to blame for filing software patents. I think that’s just wrong.
Lawyers alone cannot create patents; you also need inventors, and being just a programmer does not absolve you of your acts. Let me rephrase that: there would be no patents without inventors willing to file them.
Obviously, corporations provide incentives for employees to file patents, but in most cases it’s not the actual incentives that motivate people to patent their inventions. It’s the benefit of being a good corporate employee: not being considered a troublemaker, not losing opportunities for promotion, not bringing negative attention to yourself, not going against the tide.
I firmly believe that patents discourage progress and impede the growth of the public domain of knowledge. Today patents do not protect the inventor’s interests, but instead promote anti-competitive practices by corporations. Moreover, most software patents are vague, bogus, or trivial; they do not serve any social purpose other than expanding the patent portfolio of your corporate masters.
You may not agree with all of that, but if you do, then don’t excuse yourself by blaming the system.
During the 5 years that I worked at NVIDIA I constantly came up with algorithms and software ideas that could be patented. I implemented many of them; others I simply outlined.
So, I started a wiki page in which I documented these ideas. The goal was to prevent others from patenting them. I called them anti-patents. I usually came up with a new one every month, sometimes several.
This might seem exaggerated, but when you are designing new hardware features that no one has explored, it’s very easy to come up with new things to do with it that nobody has done before. I think this is true for almost any field when you are working on the bleeding edge.
Eventually I stopped maintaining the wiki; it was too much work to describe the ideas in detail, and in many cases I considered them trivial. In spite of that, I believe that most of them would have been pursued by NVIDIA if I had chosen to allow it.
Over time I ascended in the corporate hierarchy until I became part of a select group in charge of the design of future GPUs.
I was working with people much more experienced and smarter than me. I didn’t want to get noticed for causing trouble, but for doing good work. So, predictably, my name ended up on several patent applications.
Today, I deeply regret that.
Matt’s Gregory ACC Tessellation Render
A while ago Matt Davidson contacted me to discuss a few issues with the Gregory ACC source code that we released. It turns out there were some problems in the code that determines whether two patches have the same neighbor configuration and can therefore share stencils. The code that compares the configurations is not very robust and sometimes reports that two patches have the same configuration when in practice they do not. In retrospect, it would have been a better idea to simply compute the stencils independently and merge them afterwards.
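To illustrate what I mean by that last point, here is a rough sketch of the “compute independently, merge afterwards” approach. The Stencil structure and function names are hypothetical, not the actual Gregory ACC code; the idea is simply that exact duplicates collapse naturally instead of relying on a fragile up-front comparison of neighbor configurations.

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

// Hypothetical stencil: a list of (control point index, weight) pairs.
struct Stencil {
    std::vector<int> indices;
    std::vector<float> weights;

    bool operator==(const Stencil & other) const {
        return indices == other.indices && weights == other.weights;
    }
};

// Hash combining indices and weights so identical stencils map to the same bucket.
struct StencilHash {
    size_t operator()(const Stencil & s) const {
        size_t h = 0;
        for (int i : s.indices) h = h * 31 + size_t(i);
        for (float w : s.weights) h = h * 31 + std::hash<float>()(w);
        return h;
    }
};

// Compute every patch's stencils independently, then merge exact duplicates.
// Returns, for each input stencil, the index of the unique stencil it maps to.
std::vector<int> mergeStencils(const std::vector<Stencil> & stencils, std::vector<Stencil> & unique)
{
    std::unordered_map<Stencil, int, StencilHash> map;
    std::vector<int> remap(stencils.size());
    for (size_t i = 0; i < stencils.size(); i++) {
        auto it = map.find(stencils[i]);
        if (it == map.end()) {
            int index = int(unique.size());
            unique.push_back(stencils[i]);
            map[stencils[i]] = index;
            remap[i] = index;
        }
        else {
            remap[i] = it->second;
        }
    }
    return remap;
}
```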
In any case, he managed to work around the problem and posted a video showing the results:
I’m glad that, despite the bugs, somebody is actually using that code and finding it useful! Cool work, Matt!
Hemicube Rendering and Integration
I just posted another article about the global illumination solution that I implemented for The Witness, the second one in the series:
- The Witness – Hemicube Rendering and Integration, September 29, 2010
- The Witness – Texture Parameterization, March 30, 2010
Watertight Tessellation: precise and fma
One important feature of Direct3D 11 and OpenGL 4, which I have seen very little written about, is the addition of a new precise qualifier. It’s no surprise there’s some confusion about the purpose and motivation for this new keyword. The Direct3D documentation is vague about it:
precise affects all results that contribute to the variable’s outcome by preventing the compiler from doing unsafe optimizations. For instance, to improve performance the compiler ingores the possibility of NaN and INF results for floating point variables from constant and stream inputs in order to do several optimizations. By using the precise keyword, these optimizations will be prohibited for all calculations affecting the result of the variable.
Besides the typos (ingores?), it’s not very clear what “unsafe optimizations” actually means, since only a single example is provided. On the other hand, the OpenGL specification is a lot more “precise”:
The qualifier “precise” will ensure that operations contributing to a variable’s value are performed in the order and with the precision specified in the source code. Order of evaluation is determined by operator precedence and parentheses, as described in Section 5. Expressions must be evaluated with a precision consistent with the operation; for example, multiplying two “float” values must produce a single value with “float” precision. This effectively prohibits the arbitrary use of fused multiply-add operations if the intermediate multiply result is kept at a higher precision.
OpenGL makes it clear that precise causes operations to be performed in the exact order prescribed by the language rules. It also highlights that the precision of the operations must be exactly as prescribed and not greater, which rules out fused multiply-add as defined in the IEEE 754-2008 specification (to which most modern GPUs conform), since a fused operation does not round the intermediate product down to single precision.
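To see why letting the compiler substitute a fused multiply-add can change results bit for bit, here is a small standalone C++ sketch that evaluates the same expression with and without FMA on the CPU. The values are chosen purely for illustration; compile with floating-point contraction disabled (e.g. -ffp-contract=off on GCC or Clang) so the “separate” path really performs two rounded operations.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    float a = 1.0f + ldexpf(1.0f, -12);  // 1 + 2^-12

    // Separate multiply and add: the product a*a is rounded to float
    // precision before the subtraction, losing its lowest bit.
    float separate = a * a - 1.0f;

    // Fused multiply-add: the product is kept exact internally and only the
    // final result is rounded, as specified by IEEE 754-2008.
    float fused = fmaf(a, a, -1.0f);

    printf("separate = %.10e\n", separate);  // 2^-11
    printf("fused    = %.10e\n", fused);     // 2^-11 + 2^-24

    return 0;
}
```

The same discrepancy is what matters for watertight tessellation: if two adjacent patches evaluate their shared edge with differently optimized code, one side using FMA and the other not, the last bits can disagree and cracks appear along the edge. Qualifying the edge computation as precise forces both sides to evaluate it the same way.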
Back Online
You may have noticed that my blog has been down during the last few months. I had some trouble with the server and lost all my posts. Unfortunately, I did not have backups, so it took quite some effort to restore them. At least I’ve managed to recover most of the articles and some of the posts that I wrote last year.
I really didn’t bother to restore the ones in which I only whine about my back pain; I’d rather not remember that part of my life. Some articles still have missing images, and I may update them progressively.
In the meantime I’ve written a couple of articles for The Witness blog and in the future I’ll continue posting most of my technical articles there as well:
- The Witness – Computing Alpha Mipmaps, September 9, 2010
- The Witness – Texture Parameterization, March 30, 2010
Last spring was fantastic. I went hiking almost every single weekend and felt in better shape than ever. I’ve started writing some notes about my hikes in an attempt to plan them better; I don’t think this stuff is very useful to anyone but me, but here it is anyway:
This summer I went to Spain and stayed there for 5 weeks, much longer than usual. My grandmother died unexpectedly, and it was good to be able to spend some more time with my family. When I got back I realized how lonely I am here in the US, and how superficial all my relationships are.
I tried to stay in shape while abroad, but there’s nothing like the DAM workouts. I was flirting with lane 5 when I left and I’m now back to lane 3 again. The first two weeks have been excruciating, but I’m getting back into my routine and hopefully I’ll make my way up again.