Canon 7D - what resolution is the video really?
Did you ever get intrigued by a question, spend a good bit of time trying to figure out what the answer is, realize that you can't figure it out, and that the answer is rather meaningless anyway? That's what happened here, but since I already spent an afternoon on it, I thought I'd give you the chance to waste a bit of your time too.
A Swedish company, whose product saves the HDMI output from the Canon 7D with higher chroma sampling, stirred up a bit of an internet rumpus when someone read their product description and saw trouble.
In noting that the 7D’s HDMI output is 1620 x 910 (rather than 1920 x 1080), they wrote that the HDMI output has the “same crop as the 1080p compressed material on the camera’s memory card.”
Many interpreted this to mean that the 7D was taking the image from the sensor, creating a 1620 x 910 image, and then upscaling that to make the final 1920 x 1080 image.
Much discussion ensued here and here, and it wasn't as if there hadn't already been questions about the 7D's resolution.
I must admit, my first reaction was something along the lines of “don’t be ridiculous,” but after a bit of thought I had to wonder: how do they get from the 7D’s 18MP sensor down to a 2MP image? Could they be going to an image that’s 1620 x 910 first, and if so, how?
I should point out, before I go further, that I know just enough about how CMOS sensors and digital cameras work to be dangerous (i.e. everything I know I read in Internet forums). But I thought I’d do a bit of basic figuring to see whether I could work out what might, or might not, be going on.
A little background about how digital cameras work first:
Binning and Line Skipping
When you have a sensor with more pixels than the final image, one way to read those pixels and get a “scaled” image faster is to use binning and/or line skipping. Binning is a readout process in the chip that gives you the combined value of several pixels at once (e.g. 2 pixels, 3 pixels, etc.). Line skipping is the process of reading every other, or every third (or whatever divisor you want), row of pixels.
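To make the distinction concrete, here’s a minimal sketch in Python/NumPy, not meant to represent Canon’s actual readout hardware, contrasting 3x binning with 3x line skipping on a fake monochrome sensor array:

```python
import numpy as np

# Fake 12 x 12 monochrome sensor readout (illustration only).
sensor = np.arange(12 * 12, dtype=float).reshape(12, 12)

# Binning: combine each 3x3 block of photosites into a single value.
binned = sensor.reshape(4, 3, 4, 3).mean(axis=(1, 3))  # -> 4 x 4

# Line skipping: keep every third row and column, discard the rest.
skipped = sensor[::3, ::3]                              # -> 4 x 4

print(binned.shape, skipped.shape)  # same size, very different data
```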
As far as I know, you have to use a whole-number divisor; i.e. you can’t bin two and a half pixels, or skip two and a half pixel rows. You have to go by whole pixels. So if that’s what Canon is doing, how would they be doing it? Let’s start with the sensor:
What’s the resolution of the 7D’s sensor?
The 7D’s still images are 5184 x 3456 pixels. The sensor may not be exactly that resolution (it’s not uncommon to exclude pixels right at the edges of the chip), but let’s use that as a starting point. If you use a divisor of 2, you end up with a frame that’s 2592 x 1728, which means you have to do more work to get down to 1920 (or 1620). But if you use a divisor of 3, you get an image that’s 1728 x 1152. 1728 is less than 1920, but it’s greater than 1620, which made me think that maybe they are binning by 3.
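Spelled out as arithmetic (the whole-number divisors are my assumption from above):

```python
# Candidate video readout sizes from the 7D's 5184 x 3456 still frame,
# assuming whole-number binning/skipping divisors.
for d in (2, 3):
    print(f"divisor {d}: {5184 // d} x {3456 // d}")
# divisor 2: 2592 x 1728  (still wider than 1920 -- more work needed)
# divisor 3: 1728 x 1152  (between 1620 and 1920)
```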
But if they are doing that, they’d have to scale the image again to get to 1620. The alternative would be that they were cropping part of the image to get to 1620.
Are they cropping part of the image?
If you multiply 1620 x 910 by 3, you get 4860 x 2730. That’s noticeably smaller than the full 5184 x 3456 still frame, and a crop of that size should be visible when you compare a still image with a frame of video.
4860 x 2730 pixel frame over actual still frame
I took a still image and a frame of video and compared the two; they were almost identical. Slightly more was visible in the still image on the left and right edges, but this amounted to about 5 pixels on each side of the final HD image, roughly 1/10 of what you’d expect to have been cropped off if they were simply scaling by three and cropping the image to fit 1620. Furthermore, the slight difference between the two images might be due to the fact that the video and still image were taken at different apertures.
Image captured in still (top) and video (bottom)
Note: the top & bottom of the still image have been cropped to a 16:9 aspect ratio to help with comparison, but nothing has been removed from the sides
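To make that “roughly 1/10” explicit, here’s the per-side crop you’d expect if Canon really were reading out only the center 4860 x 2730 of the sensor:

```python
# Expected crop per side if the readout were the center 4860 of 5184:
sensor_crop_per_side = (5184 - 4860) / 2     # 162 sensor pixels
hd_crop_per_side = sensor_crop_per_side / 3  # ~54 pixels at 1/3 scale
print(hd_crop_per_side)  # 54.0 -- yet only ~5 pixels per side were missing
```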
Which leads me to believe that if they are binning or skipping by 3, they are reading out a 1728 x 1152 image and then scaling that to produce both the 1620 and the 1920 images.
What have I learned?
Absolutely nothing. But it was kind of interesting, wasn't it?
But what about image quality?
Well, comparing a still image scaled down in Photoshop to the HD video frame, you see a huge difference.
The still image clearly holds much more detail. That's not really a fair test, though, because Photoshop can spend a lot more time on scaling than the camera can.
Out of curiosity, I also tried scaling the still image down to 1728 and then back up to 1920 in Photoshop, and I still ended up with an image with more detail than the “video” frame.
Still (top) and video (bottom)
Notice color and resolution differences
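If you want to repeat that round-trip experiment without Photoshop, here is a rough equivalent in Python with Pillow. The filename is hypothetical, and Pillow’s Lanczos filter merely stands in for whatever resampling Photoshop and the camera actually use:

```python
from PIL import Image  # pip install Pillow

still = Image.open("still.jpg")  # hypothetical 5184 x 3456 (3:2) still

# Crop the 3:2 still to 16:9 (5184 x 2916), centered vertically,
# matching the comparison images above.
w, h = still.size
crop_h = w * 9 // 16
top = (h - crop_h) // 2
wide = still.crop((0, top, w, top + crop_h))

# Simulate a divisor-of-3 readout (5184 -> 1728), then the upscale
# back to full HD that the camera would presumably perform.
down = wide.resize((1728, 972), Image.Resampling.LANCZOS)
up = down.resize((1920, 1080), Image.Resampling.LANCZOS)
up.save("simulated_video_frame.png")
```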
The other elephant in the room is that when I look at the enlarged HD image, I see what looks like the result of heavy compression (particularly around the black text). So how much of the loss of image quality is due to the compressor, and how much is due to whatever method they are using to downscale the original image? I’ll leave that to someone else to figure out.
Other Variables
Since the 7D has a single chip, it uses a Bayer-pattern filter to produce a color image. The way the filter interleaves color pixels could also affect color accuracy when binning or line skipping.
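As an aside, the Bayer layout may also be why a divisor of 3 is plausible: skipping by an odd factor still lands on a valid R/G/B mosaic, while skipping by 2 in both directions would sample only one color. A quick NumPy check of that claim, which is my own sketch and not anything Canon has confirmed:

```python
import numpy as np

# Build an RGGB Bayer color-index map (0 = R, 1 = G, 2 = B).
def bayer_map(rows, cols):
    m = np.empty((rows, cols), dtype=int)
    m[0::2, 0::2] = 0  # red
    m[0::2, 1::2] = 1  # green
    m[1::2, 0::2] = 1  # green
    m[1::2, 1::2] = 2  # blue
    return m

full = bayer_map(12, 12)
for skip in (2, 3):
    sub = full[::skip, ::skip]
    ok = np.array_equal(sub, bayer_map(*sub.shape))
    print(f"skip {skip}: still a valid Bayer mosaic? {ok}")
# skip 2: False (every sample lands on a red photosite)
# skip 3: True  (odd skips preserve the R/G/B pattern)
```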
What about the Canon 5D?
Glad you asked. The Canon 5D seems to produce a slightly better image than the 7D. Interestingly, the 5D’s stills are slightly larger (5616 x 3744), which works out to 1872 pixels wide if you divide by 3: a little closer to 1920, though only about 8% wider than 1728.
LINKS
- DSLR Video - how does it work? (technical) - forums.dpreview.com
- Canon 7D Video Analysis - Tree House Art Club
Comments
To find out how many pixels are actually being recorded, simply look at the info in the resulting files. It's a standard MPEG-4 file, and any codec information tool should display that info.
As to how the scaling is done, they don't need to group the pixels by 2 or 3; graphics cards have been able to do realtime bilinear and bicubic scaling for several years, and I'm sure the Digic 4 processor can do the same. In other words, each pixel can easily be a weighted average of 2.53 pixels.
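For what it's worth, here's a toy one-dimensional illustration (my own, not the commenter's) of that point: downscaling 5184 samples to 1920 (a factor of 2.7) just means each output sample is a weighted average of fractional input positions:

```python
import numpy as np

# One row of "sensor" values, linearly interpolated down to 1920 samples.
row = np.arange(5184, dtype=float)
x = np.linspace(0, 5183, 1920)        # fractional source positions
left = np.floor(x).astype(int)
right = np.minimum(left + 1, 5183)
frac = x - left
out = row[left] * (1 - frac) + row[right] * frac
print(out.shape)  # (1920,)
```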