p. 6: “composting” to “compositing” (I think one of Jim Blinn’s articles talks about this autocorrection, my favorite in graphics)
p. 8, footnote 4: “These the curves” to “These curves”
p. 18: “HDR processing and disk representations” – I think I’d put “memory representations”, as I think what you mean by “disk representation” is how it’s stored on disk. I had to think about “disk representation” for a while (and I know what RGBE is), wondering if some color experiment with physical disks was meant.
p. 18: “to to” to “to”
p. 25: 3D-LUT is first used but not defined. I would mention in a footnote for this (and for other things in the appendix) that the term is described in the appendix. I marked it as “hmmm, I think I know what this is, but should look it up later” until I ran across it at the end.
p. 25: I was thrown off a bit by the reference to “trim passes” here, not realizing these would be explained a few pages later. I guess you can leave it be, but this sort of problem happens at other places within the document, terms used without definition (I’m safe, I know a fair number of the terms, but I can imagine it losing others). In my dream world you’d have another 40 pages describing all the little “everyone knows this so we’ll gloss over it” bits that I didn’t really follow. That said, there’s always Google, where I can usually find more about some term I don’t know (though not about concepts I don’t follow). If you want, I can reread the doc (it’d do me good) and note places that lost me.
p. 28, caption: “this detail is lost” – true enough, though “this detail is not available in the source image” seems more descriptive. The detail wasn’t lost, it was never really in the source image, right? It was in the scene-referred, but once put into a display-referred image that’s where it’s lost, not when the color corrector is applied. (If I’m wrong, then I’m missing some key point here.)
p. 29: “digital projectional” to “digital projection”
p. 32 and elsewhere: “onset” to “on-set” – maybe it’s common in the film world to see “onset” and think “on the set”, but for me it’s more like “early onset Alzheimer’s” and makes me stumble to interpret it. “on-set” would have been clearer for me. If I google “on set” Hollywood I get 21 million hits, while onset Hollywood gets just 1.5 million.
p. 36: “screen operator” – a footnote saying what this is would be nice, e.g. “Out = 1 – [(1-A) * (1-B)], which simulates the effect of adding light to a portion of the image.” I know only the basics of compositing, i.e. “over”, so hadn’t heard of this operator and had to dig around to find it.
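The suggested footnote formula is easy to sketch; a minimal illustration (helper name is mine, not from the document):

```python
def screen(a, b):
    """Screen compositing: 1 - (1-A)*(1-B), applied per channel.

    Simulates adding light: the result is never darker than either
    input, and a black (0) input leaves the other input unchanged.
    """
    return 1.0 - (1.0 - a) * (1.0 - b)

# Unlike "over", screen is commutative and needs no alpha channel.
print(screen(0.5, 0.5))  # 0.75
print(screen(0.0, 0.8))  # 0.8 (black leaves the other input alone)
```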
p. 37: “is done at the top of the compositing graph” – I’d change “top” to “beginning”. When I first saw “top” I thought it had to do with what layer was being composited (and normally you composite the bottommost two layers, then add the next layer on top via “over” or whatever, etc.).
p. 37: “was handled done to begin with” – remove “handled” or “done”.
p. 37: “and then you color timing” to “and then you performed color timing”.
p. 38: I like this section on premultiplication – I do wish we had a better term, like “coverage color” or “area color” (still not good…), because this term makes it sound like just some sort of performance hack. I do think it’s a bit odd to say a positive RGB can have a meaningful alpha of 0 – really, whatever’s burning and giving off light (such as fire) does have atoms involved, which therefore cover some tiny portion of the pixel (and which I would think absorb some tiny bit of incoming light behind them). Admittedly, with a tiny alpha you need to jack up the RGBs accordingly for fire.
The problem with a 0 alpha is that the “over” operator would make the flame disappear – alpha of 0 works only with some modified composite operation along the lines of “add”. Which is fine from a practical standpoint, but if you want to think of alpha as some amount of coverage of the pixel, an alpha of 0 means “this doesn’t cover the pixel at all”. Anyway, a super-niggly point, but interesting for a round of beers someday, and we can agree to disagree for now.
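For what it’s worth, the premultiplied form of “over” makes the alpha-0 case concrete – with a foreground alpha of 0 it literally degenerates to an add, which is exactly the “modified composite along the lines of add” above. A minimal sketch (function name and sample values are mine):

```python
def over_premult(fg, bg):
    """Porter-Duff "over" for premultiplied (R, G, B, A) tuples:
    out = fg + (1 - fg_alpha) * bg, applied per component."""
    a = fg[3]
    return tuple(f + (1.0 - a) * b for f, b in zip(fg, bg))

# A premultiplied "fire" pixel: positive RGB, alpha 0 (pure emission).
fire = (0.9, 0.4, 0.1, 0.0)
bg = (0.2, 0.2, 0.2, 1.0)

# With alpha 0, (1 - a) = 1, so "over" reduces to a plain add:
# the flame does not disappear; it contributes its emitted light.
print(over_premult(fire, bg))  # approximately (1.1, 0.6, 0.3, 1.0)
```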
p. 40, same paragraph: this was very interesting to me, but I’m left hanging. If we ignore the whole physically-based approach, just to get [0.0-1.0] range CG lighting to work properly we’ll still do gamma correction, meaning we’ll linearize textures before sampling them. I’m not sure what this section, which also considers the whole S-curve tone mapping process, has as a solution. It concludes, “necessitates a color transform that gracefully handles these limits” – what transform is that? I’d “de-gamma”, of course, but beyond that I’m not sure what people do, if anything.
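For reference, the “de-gamma” step I mean is just inverting the sRGB transfer function before sampling; a minimal sketch of the piecewise form from IEC 61966-2-1:

```python
def srgb_to_linear(v):
    """Invert the sRGB transfer function (IEC 61966-2-1) for a
    normalized [0, 1] encoded value."""
    if v <= 0.04045:
        return v / 12.92                     # linear toe segment
    return ((v + 0.055) / 1.055) ** 2.4      # power segment

# Encoded mid-grey is much brighter than its linear-light value:
print(round(srgb_to_linear(0.5), 3))  # ~0.214
```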
p. 40: “we advocate is to linearize prior to mipmap texture generation” – this is just about a given in the videogames industry, AFAIK. Having your mipmap get darker at the horizon looks bad. Here’s a 4-year-old post, for example, http://filmicgames.com/archives/327, and the oldest reference I know is from 2002: Blow, Jonathan, "Mipmapping, Part 2," Game Developer, vol. 9, no. 1, pp. 16-19, Jan. 2002. It’s been best practice for 12 years for interactive applications (and I know the simulator guys knew of this problem before then), so I’d change this to “the proper way is to …”.
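The darkening effect is easy to demonstrate numerically; a sketch of averaging a black/white checker down one mip level, assuming a simple gamma-2.2 encoding for illustration:

```python
GAMMA = 2.2

# Naive: average the stored (gamma-encoded) texel values directly.
# The mip stores 0.5, which displays as 0.5**2.2 in linear light --
# noticeably darker than the true 50% grey of the checker.
naive_mip = (0.0 + 1.0) / 2
print(round(naive_mip ** GAMMA, 3))   # ~0.218 linear: too dark

# Correct: linearize, average in linear light, then re-encode.
linear_avg = (0.0 ** GAMMA + 1.0 ** GAMMA) / 2   # 0.5 in linear light
correct_mip = linear_avg ** (1 / GAMMA)
print(round(correct_mip, 3))          # ~0.73 encoded = true 50% grey
```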
p. 43: the RRT is, then, just some standard S-curve applied to the linear data (which has then already gone through whatever other color corrections were needed, if done in the “filmic” style). It might be worth saying “(standardized S-curve)” somewhere in here, as it would help to know that this is all the RRT is (right? Or is it more?).
By the way, I like the pipeline drawing and explanation here. Having more figures like this, showing the pipeline, inputs, and outputs (and intermediate storage formats used) early on in this document would help us readers a lot, giving us an anchor to refer to when we’re not sure what part of the pipeline is being discussed. The illustration on p. 24 goes a bit in this direction, but I’d like more details (file formats used, for example).
p. 44: “without the user awareness” to “without the users’ awareness”.
p. 45: “it can evaluate to only one of 255 output values” – change to “256”. I’d also change “has more than 255 total pixels” to “… 256 …”. Also, a super-minor point, but in pixel shaders I’m told that passing in the LUT as a uniform array instead of as a texture will lead to faster processing – texture lookup is slow.
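On the 255-vs-256 count: a uint8 channel indexes 256 entries, so a per-channel 1D-LUT looks like this (a minimal sketch; the gamma curve is just placeholder content):

```python
# Build a 256-entry 1D-LUT, here a gamma-2.2 encode for 8-bit input.
lut = [round(255 * (i / 255) ** (1 / 2.2)) for i in range(256)]

def apply_lut(pixel, lut):
    """Apply a per-channel 1D-LUT to an (R, G, B) uint8 pixel."""
    return tuple(lut[c] for c in pixel)

# Every 8-bit input maps to one of at most 256 output values.
print(len(lut))                       # 256
print(apply_lut((0, 128, 255), lut))  # (0, 186, 255)
```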
p. 46: “only 4 elements to sample 4 locations” – this phrase threw me. 1D-LUTs have 256 entries. How’d we get down to 4 entries along each dimension? I think I’d start out large, noting the massive size of a 256^3 3D-LUT. I’d be interested in knowing what the “sensible” size is for these in the industry – 4 seems unlikely, I’d guess 32 or more.
p. 46: “43 = 64” to “4^3 = 64” – the 3 should be a superscript
p. 46: “such as gamma, brightness, and contrast” – well, gamma’s a 1D-LUT, so I’d certainly expect it to be expressible as a 3D-LUT. I assume the others are, too. So I think I’d put somewhere “Of course, any 1D-LUT can be made into a 3D-LUT” or somesuch.
p. 49: “for color image (color channels” – there is no right parenthesis to match this left one.
p. 49: “per pixels)” to “per pixel)”
Re: the comment on p. 46: “such as gamma, brightness, and contrast”, I would not make the statement that any 1D-LUT can be made into a 3D-LUT. When a 1D-LUT is applicable, it will more accurately represent the transformation than a 3D-LUT because of the greater sample density. I don't think we should encourage the use of 3D-LUTs where 1D works better.
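The density argument is easy to quantify; a quick sketch assuming a 4096-entry 1D-LUT (a common shaper length) against a typical 33^3 3D-LUT:

```python
entries_1d = 4096   # assumed 1D-LUT / shaper length
n = 33              # assumed 3D lattice size per axis

# The 3D-LUT holds far more total entries...
print(n ** 3, "total 3D-LUT entries vs", entries_1d, "1D entries")

# ...yet along any single axis it has only 33 samples, so a steep
# per-channel curve (e.g. gamma near black) is sampled far more
# coarsely than the dedicated 1D-LUT resolves it.
print(entries_1d // n, "x coarser per axis")  # 124
```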
p. 5: typo, “flip-slide” to “flip-side”
p. 41: just have to point out an interesting link: http://petapixel.com/2013/02/15/there-are-giant-camera-resolution-test-charts-scattered-across-the-us/