
Why UHD video looks better than Full HD video, even on a 1080p screen


By JUAN CARLOS BAGNELL on FEBRUARY 9, 2016



I’m a little disheartened by some of the commentary regarding video features like Ultra HD resolution. We’ve had the capability to shoot UHD video on our phones for a couple years now, but people still dismiss the benefits of saving four times more pixels than “Full HD” video. We’re going to dispel some myths, talk about compression, and hopefully change some minds on what higher quality video brings to the table.

Frustrating terminology

Compounding the difficulty in setting the record straight are the obnoxious letter abbreviations for screen and video resolution. “HD” can mean 720p, 1080i, or 1080p. When buying a TV it’s becoming less common to see “HD” on a 720p panel anymore, but when shooting video on your phone, “Full HD” is usually used to represent 1080p. Quad HD (QHD) is four times “HD” (if “HD” is 720p), which is 2560 x 1440, and that’s a popular screen resolution for high-end phones now. Ultra HD (UHD) is four times the resolution of “Full HD” 1080p, or 3840 x 2160. UHD video is also sometimes called 4K, even though it doesn’t really reach four thousand pixels across. Moving forward in this article, we’re going to abandon those pesky abbreviations. It’ll make this read a bit repetitive, but from here on we’re only going to use numbers: 1080p, 2160p, etc.
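If it helps to see the arithmetic laid out, here’s a quick Python sketch (the pixel counts are just the resolutions listed above) showing which multiples those marketing names actually refer to:

```python
# Pixel counts behind the common resolution labels (progressive-scan variants).
resolutions = {
    "720p (HD)":        (1280, 720),
    "1080p (Full HD)":  (1920, 1080),
    "1440p (Quad HD)":  (2560, 1440),
    "2160p (Ultra HD)": (3840, 2160),
}

hd_pixels = 1280 * 720
full_hd_pixels = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:18} {w}x{h} = {pixels:>9,} px, "
          f"{pixels / hd_pixels:.2f}x 720p, {pixels / full_hd_pixels:.2f}x 1080p")
```

Running it confirms the two different “four times” claims: Quad HD has 4.00x the pixels of 720p, while Ultra HD has 4.00x the pixels of 1080p.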

But I only have a 1080p TV/phone/monitor?

I’m here to tell you that owning a 1080p screen should not deter you from shooting 2160p video. Higher quality video is higher quality video. If you’re using your phone to capture important moments and memories, we should be looking to preserve those memories at the highest quality we can. Extending beyond just future-proofing your videos, 2160p will look better on a 1080p display than 1080p video will look on that same 1080p display. This isn’t some psychosomatic mind trick; we have real science and math to back up this claim.

More information per second

First of all, one basic improvement is in the bitrate. Our phones save more information per second when shooting at a higher resolution. Bitrate is a measurement of information saved or processed over time. This measurement is expressed the same way we talk about your internet speed, in megabits per second, abbreviated as Mbps. Saving a higher bitrate makes sense since there are more pixels being stored, but this also translates into more color and contrast for your videos.
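To put a rough number on what that means for storage, here’s a small Python sketch; the two bitrates in it are illustrative assumptions (actual figures vary by phone and codec), not measurements from any specific device:

```python
# Rough storage cost of a video bitrate.
# The example bitrates below are illustrative assumptions, not measured values.
def megabytes_per_minute(bitrate_mbps: float) -> float:
    """Convert a bitrate in megabits per second to megabytes per minute."""
    return bitrate_mbps / 8 * 60  # 8 bits per byte, 60 seconds per minute

for label, mbps in [("1080p at ~20 Mbps", 20), ("2160p at ~40 Mbps", 40)]:
    print(f"{label}: roughly {megabytes_per_minute(mbps):.0f} MB per minute")
```

Doubling the bitrate doubles the storage eaten per minute, which is exactly the trade-off described next.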

We saw a noticeable improvement in 2160p video quality moving from the LG G4 to the LG V10. Why was this? LG doubled the bitrate. The two cameras are incredibly similar in terms of hardware, but doubling the information per second gave us better output. Of course the compromise is that video from the V10 takes up twice as much storage space as video from the G4.

Between different phones, 2160p video is often saved at close to twice the data rate of 1080p video. Bitrate isn’t the full story though, and here’s where things get geeky…

Downsampling color information

Buckle up, video stuff gets funky.

Your TV, computer monitor, or phone screen is made up of pixels, probably an RGB matrix. Red, green, and blue subpixels, tiny colored dots, make up every actual pixel on the display. While the hardware has three physical subpixels, video files can save color information in a number of different ways. The color format used by virtually all consumer video compression is YCbCr. The “Y” represents the luminance value, how bright or dark the image is, but this is only a black and white representation of your shot. The “Cb” is your blue information, and the “Cr” is red. Mixing values between brightness, red, and blue in the video file is how software describes to your display how to use those physical RGB subpixels.
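As a concrete illustration of how those three values relate to the red, green, and blue your screen actually lights up, here’s a minimal Python sketch of the conversion using the BT.709 coefficients. It’s a simplification: real encoders work in fixed point and usually with limited “video” range, but the relationship between luma and the two color-difference channels is the same:

```python
# Minimal sketch: RGB -> Y'CbCr using BT.709 coefficients (full-range floats).
# Real encoders use fixed-point math and limited "video" range; this only shows
# how the brightness (Y') and color-difference (Cb, Cr) values are derived.
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """r, g, b in [0, 1]; returns (Y', Cb, Cr) with Cb/Cr centered on 0."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # luma: perceived brightness
    cb = (b - y) / 1.8556                     # blue-difference chroma
    cr = (r - y) / 1.5748                     # red-difference chroma
    return y, cb, cr

# A warm orange pixel: bright luma, Cb pulled negative, Cr pushed positive.
print(rgb_to_ycbcr(1.0, 0.5, 0.25))
```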

In a perfect world we would have a YCbCr value for every pixel in our video file. This is how high quality professional video files are saved. We call this 4:4:4, which represents a video with no subsampling, and it results in HUGE video files, saving three data values for every pixel in every frame of footage. It’s information dense for editing, but that’s not what our phones shoot.

Human eyes are way more sensitive to that “Y” light information than they are to the “Cb” blue or “Cr” red info. For consumer video cameras we need to save space and process these videos faster, so we squish the blue and red color information. The way our phones store color info is described as 4:2:0. This means every pixel has brightness info, but only one quarter of the pixels have blue and red info. If we could separate each of these values out, we would see a crisp black and white image, but only quarter-resolution images for blue and red.
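The storage savings from that squish are easy to count. Here’s a short Python sketch tallying raw samples per frame for 4:4:4 versus 4:2:0 (8 bits per sample assumed; the codec compresses things much further, but the ratio is what matters):

```python
# Raw samples stored per frame for 4:4:4 versus 4:2:0 chroma subsampling.
def samples_per_frame(width: int, height: int, subsampling: str) -> int:
    luma = width * height                          # one Y sample per pixel
    if subsampling == "4:4:4":
        chroma = 2 * width * height                # full-resolution Cb and Cr
    elif subsampling == "4:2:0":
        chroma = 2 * (width // 2) * (height // 2)  # one Cb/Cr pair per 2x2 block
    else:
        raise ValueError(f"unknown subsampling {subsampling}")
    return luma + chroma

for width, height in [(1920, 1080), (3840, 2160)]:
    full = samples_per_frame(width, height, "4:4:4")
    sub = samples_per_frame(width, height, "4:2:0")
    print(f"{width}x{height}: 4:4:4 = {full:,} samples, "
          f"4:2:0 = {sub:,} samples ({sub / full:.0%} of the data)")
```

In other words, 4:2:0 stores only half the raw samples of 4:4:4 at the same resolution.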

For 1080p video we lose a lot of clarity when we shoot 4:2:0, but something rad happens when we shoot 2160p and watch it on a 1080p screen.

2160p smartphone video is actually “real” 1080p

Our phones still save color in 4:2:0 for 2160p video, but when we watch it on a 1080p screen, the image is squished to fit the lower resolution. Every four pixels in the video file will represent one pixel on the monitor. Since 4:2:0 stores one Cb/Cr pair for every 2 x 2 block of pixels, and each 2 x 2 block becomes a single pixel after the downscale, every pixel on the monitor inherits its own full YCbCr value. Essentially, when we watch 2160p video on a 1080p screen, we get a 4:4:4 1080p video file.

2160p 4:2:0 video becomes 4:4:4 video when played on a 1080p display
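If you want to check the arithmetic behind that claim, here’s a small Python sketch. It models the display scaler as simple 2 x 2 averaging (an assumption; real scalers use fancier filters), which is enough to show the pixel accounting:

```python
# Why 2160p 4:2:0 behaves like 4:4:4 once it is scaled down to 1080p.
# Assumes a simple 2x2 downscale; real display scalers use fancier filters.
src_w, src_h = 3840, 2160               # 2160p source frame
dst_w, dst_h = src_w // 2, src_h // 2   # 1080p output after the 2x downscale

# 4:2:0 stores one Cb/Cr pair per 2x2 block of source pixels.
chroma_pairs = (src_w // 2) * (src_h // 2)
output_pixels = dst_w * dst_h

print(f"Cb/Cr pairs in the 2160p file:  {chroma_pairs:,}")
print(f"pixels on the 1080p display:    {output_pixels:,}")
print(f"Cb/Cr pairs per output pixel:   {chroma_pairs / output_pixels:.1f}")
# -> 1.0: every 1080p pixel gets its own chroma pair, which is what 4:4:4 means.
```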

With more information per second and full color information for every pixel on a 1080p screen, shooting 2160p is the better solution for clarity today, and those videos will look better as you upgrade to higher resolution screens. A 1080p video won’t look bad on a 1440p (QHD) or 2160p screen, but it won’t look nearly as good as a 2160p video will.

The downside to 2160p?

We mentioned earlier that 2160p video takes up a lot more space. It’s hard to ignore the chilling effect of the base model iPhone 6S, for example (which starts users off with less than 16GB of usable storage), when using features like 2160p video recording, but even 32GB of built-in storage on a Galaxy can feel a little claustrophobic if you hit the camera hard. This is absolutely a practical concern for why some people might be selective in how or when they use the 2160p video mode.
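For a feel of how quickly that storage disappears, here’s one more back-of-the-envelope Python sketch; the free-space and bitrate figures are illustrative assumptions, not specs for any particular phone:

```python
# How many minutes of footage fit in a given amount of free storage?
# Free-space and bitrate figures are illustrative assumptions only.
def minutes_of_video(free_gb: float, bitrate_mbps: float) -> float:
    free_megabits = free_gb * 1000 * 8        # GB -> megabits (decimal units)
    return free_megabits / bitrate_mbps / 60  # seconds -> minutes

print(f"{minutes_of_video(10, 40):.0f} minutes of ~40 Mbps 2160p in 10 GB free")
print(f"{minutes_of_video(10, 20):.0f} minutes of ~20 Mbps 1080p in 10 GB free")
```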

There are also situations where it might be more beneficial to use a higher frame rate 1080p mode. I prefer shooting most of my reviews in 1080p at 60 frames per second to show liquidy smooth movement. I think that’s helpful for illustrating how snappy a phone’s UI might be, for example. Soon we’ll be getting phone cameras capable of 2160p at 60fps, but I digress…

Ultimately, when content producers (like the fine team here at Pocketnow, shameless plug) produce higher quality video, it still benefits the viewers who might not be able to play back that video at native resolution. Those viewers are still receiving a higher quality experience, be it a video review at 60fps or a camera comparison at 2160p.

What we need to stop doing, though, is repeating excuses that don’t fit the math. A lot of people won’t understand the benefits of shooting higher quality video, and I’ve personally run into confusion from a number of people who believe that a higher res file won’t even play on a lower res screen, or that it might be distorted. The next time someone “contributes” to a conversation with a witty barb like “but you can’t even see all the pixels when most people will watch on a 1080p screen,” you can smile to yourself, confident in the actual knowledge of just how wrong they are.

It’s time folks. If your phone supports 2160p capture, flip the switch. Join us in the wonderful world of higher quality video. I think you’ll like it here…

Screenshot of 1080p video played on a 1080p display.

Screenshot of 2160p video played on a 1080p display.

Crop comparison of 1080p and 2160p video when played on a 1080p display.

