The 2008 contrast peak wasn't just aesthetic - it coincided with widescreen HDTV adoption hitting critical mass (2007-2009). Films were being designed with extreme contrast specifically because they needed to transfer well onto consumer HD displays, which were often badly calibrated and viewed in bright living rooms. Bold lighting with clean edges survived that transition better than subtle gradations.
I worked on Discovery HD docs during that era, and the pipeline forced these choices. Early HD compositing was fragile - After Effects would clip or band easily if contrast wasn't managed carefully. You optimized for what would survive broadcast encoding and consumer displays, not purely cinematic intent.
Then streaming platforms scaled (2010s), and the optimization changed again. Content needed to work across laptop screens, phones, tablets - all with different brightness and viewing conditions. Images that avoid extreme local contrast compress more efficiently at acceptable bitrates. Encoders struggle most with high-frequency detail and sharp edges during motion; soft gradients and gently rolled-off contrast are cheaper to encode.
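A rough way to see why sharp edges are expensive for DCT-based codecs — this is an illustrative sketch of the energy-compaction idea, not any specific encoder; the 99% energy threshold and the 64-sample test signals are arbitrary choices of mine:

```python
import numpy as np

N = 64
n = np.arange(N)
# Unnormalized DCT-II basis: the transform family behind JPEG and most video codecs
C = np.cos(np.pi * (2 * n + 1) * n.reshape(-1, 1) / (2 * N))

smooth = np.linspace(0.0, 1.0, N)       # soft gradient across the frame
sharp = (n >= N // 2).astype(float)     # hard contrast edge (a step)

def coeffs_for_energy(signal, target=0.99):
    """How many AC coefficients are needed to capture `target` of the AC energy."""
    ac = (C @ signal)[1:]               # drop the DC term
    e = np.sort(ac ** 2)[::-1]
    cum = np.cumsum(e) / e.sum()
    return int(np.searchsorted(cum, target) + 1)

n_smooth = coeffs_for_energy(smooth)
n_sharp = coeffs_for_energy(sharp)
# The step edge spreads its energy across far more coefficients than the
# gentle ramp does - i.e. more bits to hit the same quality target.
print(n_smooth, n_sharp)
```

Same content, same dynamic range, but the hard edge needs many more transform coefficients; that's the bitrate cost the streaming pipeline is quietly optimizing away.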
Your finding that modern films are "lit more gently, not more evenly" captures this perfectly. We're not eliminating directional light, we're compressing the range so it survives compression algorithms and works across inconsistent display environments.
If we're heading toward a content reset around Q2 2027, the question is: what display format are we optimizing for next? Theatrical is weakening, streaming is fragmenting, and micro-dramas on 6-inch portrait screens might be setting the next visual language.
Great data analysis as always! I'm not an expert either, but one thing that makes it hard to analyze completely is that for a long time (the 1970s, maybe the 80s, to at least the late 90s) features had to make a low-contrast ("Lo-con") print that was used for telecine, for TV and home video use. This was a standard deliverable during that period, at least for indie films. Consequently, if anyone's analyzing films from that time, you're likely looking at those Lo-con print telecines rather than the theatrical or "answer print" versions. Many DPs and directors hated how these Lo-con prints looked and what they did to the original intent of the films, but the distributors required them in order to pass QC as "TV safe." (The theory being that standard-def TVs couldn't handle the full contrast range.) I think it had a feedback effect, too, leading DPs to intentionally use less contrast so they'd start from a TV-safe version (especially if it wasn't exactly a theatrical-release kind of movie). You probably see this issue in TV shows shot on film during that era, too (compare LA LAW with Hill Street Blues, which was revolutionary for its gritty contrast in a network drama). It's interesting that your contrast graph aligns with this period. Sadly, for many films that used Lo-con prints, those are the only versions that exist now. [My experience comes from being post supervisor on the film that would eventually be called AMERICAN KICKBOXER 2 in the summer of '92, and then making my own 35mm feature OMAHA (the movie) a couple of years later. Notably, for really low-budget indies like that film (a co-founding film of Slamdance), we couldn't afford Lo-con prints - we just did a "one-light telecine."]
My take, from that same snarky seat regarding framing: first, dynamic range has essentially doubled, so contrast is naturally spread out over more stops. Next, the digital post pipeline, even on a laptop now, allows color grading that would have been impossible 20 years ago. That creates an opportunity to grade by eye, which can easily diminish the potential range. Then there's on-set knowledge and application. These days there are 'DPs' who don't bother to meter anything, relying on their amazing keen eye to light a scene - and then on the crew, I guess, to make sure it's OK. I have found a significant difference in image quality between lighting properly with a meter and lighting from a monitor. Monitors can only show you 2D. Some argue a light meter doesn't see light the same way a digital camera does, and while I suppose that's possible, I question the logic or science behind it. A monitor will never tell you the true value of a backlight, or of each individual light you painted with. So what you get, IMO, is a lack of access to the full dynamic range: potentially an 8-10 stop image on a 16-stop palette. As with framing, you also have execs in the proverbial room dictating framing and exposure based on phone viewing. Finally, on the 'less contrasty' point: I have actually noticed higher contrast lately - films that are underexposed, probably by accident, then have the black levels lifted and crushed to avoid noise. This again comes from the highly talented who light entirely by eye. Granted, now more than ever you can grossly underexpose and salvage an image; you can, as some do, keep bumping the ASA/ISO instead of lighting the scene, or just wiggle the f-stop until it appears 'great' on the monitor. All products of iPhonography, amazing camera technology, and those who are super talented at lighting without any meters. ;-)
Love this. I have been having the same conversations with friends lately. I felt it especially with del Toro's Frankenstein: the lighting and grade felt strangely flat and even, like the shadows got ironed out. Curious if anyone else had that reaction, or if I am just noticing it more now.