So far, the overall reception of Canon’s latest EOS C200 camcorder has been rather positive, even enthusiastic at times. As with any other camera out there, however, the C200 has its shortcomings, although videographers do seem to enjoy the newcomer, as it provides an impressive feature set for the price. Of course, one of the most appealing features of the camera is its ability to record 4096×2160 12-bit Cinema RAW Light at up to 30p, as well as 4096×2160 10-bit Cinema RAW Light at 60p.
Even among seasoned professionals, the topic of bit depth remains somewhat controversial, so if you’d like to find out for yourself how the available recording options on the C200 stack up against each other, filmmaker Rubidium of Crimson Engine gives you an excellent opportunity: he has put together a short video that demonstrates the real-world capabilities of the camera across its recording options.
It’s no secret that there are currently plenty of consumer-grade video cameras that still shoot at a standard 8-bit depth regardless of the codec on board. For most applications, this limitation isn’t a real deal-breaker for content creators, unless you need to shoot a documentary or TV broadcast production that specifically requires a 10-bit 4:2:2 codec and a bitrate of 50 Mbps or higher.
Buying a more expensive professional camcorder usually removes these restraints by providing higher bit-depth recording options. But just because you can shoot at a higher bit depth doesn’t necessarily mean you always should, right? With that in mind, let’s take a closer look at the screen grabs below.
Starting with the 12-bit video shot in RAW, you can see that the overall image looks more natural, sharper, and true-to-life compared to the footage shot at lower bit depths. The blues seem more saturated and lifelike, the yellow orbs of light in the background are warmer, and the distinct colors of the skin still manage to blend in a way that doesn’t ‘smudge’ the image. All of this comes down to the fact that the higher bit depth retains far more of the color information captured by the sensor.
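To put rough numbers on that (a quick back-of-envelope illustration of the math, not something taken from the video): each extra bit doubles the number of tonal steps available per channel.

```python
# Tonal steps per channel at common video bit depths, and the
# total number of representable colors across three channels.
# Pure arithmetic, no camera-specific values involved.
for bits in (8, 10, 12):
    levels = 2 ** bits
    total_colors = levels ** 3  # R, G and B channels
    print(f"{bits}-bit: {levels:,} levels per channel, "
          f"{total_colors:,} possible colors")
```

That works out to 256 levels per channel at 8-bit versus 4,096 at 12-bit, a 16x jump in tonal resolution, which is why subtle transitions in skin and light falloff hold up so much better.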
Looking at the 10-bit video, which was recorded to an external recorder in Apple ProRes 422, there is a subtle difference between the 12-bit and 10-bit images. While the 12-bit footage provides a vivid and accurate representation of the blues and yellows in the background, the 10-bit video displays those colors in a somewhat desaturated manner.
Even though the colors are less distinct, the palette can easily be improved with a bit of color grading. As for the skin tones, the colors still blend well together, although the intensity that was apparent in the 12-bit footage seems to have been lost due to the lower color depth of the 10-bit recording.
Finally, when analyzing the traditional 8-bit video shot in 4:2:0 MP4, the difference becomes even more evident compared to the shots recorded at higher bit depths. In the 8-bit 4:2:0 footage, the colors are noticeably flatter, and there isn’t much distinction between the saturation of the yellow orbs and the blue shelves and drawers. The colors on the face blend into a single overall tone, with patches of ‘whiteness’ and ‘redness’ across the frame.
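Part of that blending comes from the 4:2:0 chroma subsampling rather than the bit depth itself: only one color sample is stored for every 2×2 block of pixels. Here’s a minimal NumPy sketch of the general idea (an illustration of subsampling, not Canon’s actual encoder):

```python
import numpy as np

# Simulate 4:2:0 storage of one chroma plane: keep a single
# averaged color sample per 2x2 pixel block, then stretch it
# back to full size the way a simple decoder would.
h, w = 4, 4
chroma = np.arange(h * w, dtype=np.float32).reshape(h, w)

# Average each 2x2 block down to one stored sample.
stored = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Nearest-neighbour reconstruction on playback.
rebuilt = stored.repeat(2, axis=0).repeat(2, axis=1)

print(chroma)   # original per-pixel color detail
print(rebuilt)  # fine color edges are smeared across blocks
```

Luma is kept at full resolution, which is why the frame can still look sharp while colors along edges, such as the boundary between skin and background, bleed into each other.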
So there is indeed a difference between shooting at different bit depths in terms of color fidelity and tonal rendition. As for which is best, the award goes to the 12-bit RAW recording, although it’s important to note that higher bit depths come with more resource-intensive codecs, and those codecs produce much larger files. While 12-bit Cinema RAW Light provides the highest image quality, you should figure out which bit depth will work for you on a project-by-project basis, and whether shooting RAW would be a feasible option for any of your productions.
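To get a feel for that trade-off, here is a rough storage calculation. The data rates below are the approximate figures commonly quoted for the C200 (around 1 Gbps for Cinema RAW Light and 150 Mbps for the internal 4K MP4) and should be treated as assumptions rather than measured values:

```python
# Back-of-envelope storage cost per minute of footage.
# The bitrates are approximate, commonly quoted figures
# for the C200, used here purely for illustration.
rates_mbps = {
    "12-bit Cinema RAW Light": 1000,  # ~1 Gbps (assumed)
    "8-bit 4K MP4 (H.264)": 150,      # assumed
}
for name, mbps in rates_mbps.items():
    gb_per_minute = mbps / 8 * 60 / 1000  # Mbps -> GB per minute
    print(f"{name}: ~{gb_per_minute:.1f} GB per minute")
```

At roughly 7.5 GB per minute, RAW Light fills a 128 GB CFast card in about 17 minutes, which is exactly the kind of constraint worth weighing project by project.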
[source: Crimson Engine]
B&H Order Link:
Canon EOS C200 EF Cinema Camera
Disclaimer: As an Amazon Associate partner and participant in the B&H and Adorama affiliate programmes, we earn a small commission from each purchase made through the affiliate links listed above, at no additional cost to you.
The most staggering difference shows up when you need to process the footage: lifting shadows, keying, or color correction.
In a perfect world where all your footage is perfectly exposed with perfect white balance, 8-bit can be enough….
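To make that concrete, here’s a small sketch (my own illustration, not something from the video) of why heavy grading punishes low bit depths: the fewer quantization levels the shadows were captured with, the fewer distinct steps survive a shadow lift.

```python
import numpy as np

# A dark gradient occupying only the bottom 10% of the signal
# range, i.e. the underexposed shadows you might try to rescue.
signal = np.linspace(0.0, 0.1, 1920)

for bits in (8, 10, 12):
    levels = 2 ** bits
    # Quantize as a codec at this bit depth would store it.
    quantized = np.round(signal * (levels - 1)) / (levels - 1)
    lifted = quantized * 5.0  # an aggressive shadow lift
    print(f"{bits}-bit: {len(np.unique(lifted))} distinct steps after the lift")
```

The 8-bit file has only a couple dozen distinct values to stretch across half the tonal range, which shows up as visible banding; the 12-bit file still has hundreds to work with.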
Sorry, but I call BS on this video. Why? Well, if you switched codecs you would have to stop recording, so there would be a jump cut. Here it’s clearly a continuous take. Even if you did morph cuts, you can usually see them. Instead, I think they simply simulated the looks by dialing down sharpness and/or doing channel blurs on R and B.
Of course, YT compresses to 8-bit 4:2:0 H.264. No idea what they uploaded, although YT will accept ProRes, which would save a recompression step.
Either way, the lack of jump cuts when changing codecs says these aren’t the actual codecs.
Steve,
The user made this video by using the camera’s multiple recording options: internally, the camera records 12-bit RAW and, at the same time, an 8-bit MP4 to a different card slot, and he also used an external ProRes recorder to make a 10-bit 4:2:2 file. He then synced the footage, and that’s the result, so it was all recorded at the same time.
The file you can record to the SD card at the same time you shoot RAW to the CFast card is a proxy, so it doesn’t have the same amount of information as a file shot directly to the SD card.
Rubidium?
What resolution are you shooting your 10-bit 422 at on the external recorder?