Samsung is without a doubt a pioneer in technological advancement, and its development of high-performance image sensors has driven major improvements in video and photo quality. At Tech Day 2022, an event dedicated to system LSI technologies such as image sensors, the company shared its most recent enhancements and its plans for the future.
Thanks to the rapid advancement of image sensor technology, photos taken by smartphones are now comparable to those taken by professional digital cameras. Sensor innovation is essential here, but the accompanying HDR and AI software, including multi-frame noise reduction, has also been crucial to this dramatic change.
Over the past few decades, numerous new applications for cameras have also emerged. Aside from still images, cameras are now used most frequently for video, a trend that well-known social media apps have made even more compelling. Yet a quality gap remains between still images and video, and it has proven difficult to close.
Why Smartphone Video Still Lags Behind
Visual noise, particularly in low light, is the primary factor behind the lag in smartphone video quality. In video mode at 30 frames per second (fps), there simply isn't enough exposure time per frame to gather enough light.
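The exposure budget the article refers to follows directly from the frame rate. A minimal sketch (illustrative only; the `readout_overhead_ms` parameter is an assumption, not a Samsung figure):

```python
# Illustrative: the per-frame exposure budget shrinks as frame rate rises.
def max_exposure_ms(fps: float, readout_overhead_ms: float = 0.0) -> float:
    """Upper bound on per-frame exposure time, in milliseconds."""
    return 1000.0 / fps - readout_overhead_ms

print(max_exposure_ms(30))  # ~33.3 ms per frame at 30 fps video
print(max_exposure_ms(60))  # ~16.7 ms at 60 fps, half the light per frame
```

A still photo in low light can expose for hundreds of milliseconds; a 30 fps video frame never gets more than about 33 ms, which is why noise dominates.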
High-dynamic-range (HDR) imaging is the second challenge, because it is hard to render an object's true colors and the background under complicated lighting. To achieve this, a smartphone's system-on-chip (SoC) must support multi-exposure capture and multi-frame fusion, which in turn demands more power and memory.
Finally, there is the lack of depth sensing. Bokeh, the pleasing background blur, is one of a DSLR camera's best and most frequently used effects. For video, however, this technique is impractical for the same reasons as HDR.
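The idea behind sensor-driven bokeh is that a per-pixel depth map (for example, from a ToF sensor) tells the pipeline which pixels belong to the subject and which to the background. A toy 1-D sketch of that selection step, with all names and values purely illustrative:

```python
# Hypothetical sketch of depth-driven bokeh: pixels far from the focal
# plane take their value from a pre-blurred copy of the image.
def apply_bokeh(sharp, blurred, depth, focal_depth, tolerance):
    """Blend sharp and blurred pixels by distance from the focal plane."""
    out = []
    for s, b, d in zip(sharp, blurred, depth):
        # Keep the subject sharp; blur everything outside the focus range.
        out.append(s if abs(d - focal_depth) <= tolerance else b)
    return out

sharp   = [10, 20, 30, 40]       # toy 1-D "image"
blurred = [15, 18, 28, 35]       # same image after a blur filter
depth   = [1.0, 1.1, 3.0, 5.0]   # metres, e.g. from a ToF sensor
print(apply_bokeh(sharp, blurred, depth, focal_depth=1.0, tolerance=0.5))
# → [10, 20, 28, 35]: the two near pixels stay sharp, the far two blur
```

Without true depth data, software has to guess this mask from image content, which is exactly why video bokeh has been hard to do well.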
“We decided that we needed to solve these problems with sensors as opposed to software, and we’re taking a three-part approach. First, we’ll be making improvements in light and exposure sensitivity, which has been a big challenge, especially for small and thin smartphone cameras,” said Kim. “Second, to increase the luminance range, we are working on 12-bit and 14-bit sensors, and towards even higher dynamic range sensors for superb HDR. And thirdly, we are developing ToF (Time-of-Flight) sensors that detect true image depth. Our goal is to provide a precise bokeh for smartphone video and other 3D applications.”
Innovative ISOCELL Pixel Techniques
In the past couple of years, significant advancements have been made in pixel technology so that as much light as possible can be captured. The pixel structure has evolved from front-side illumination (FSI) to back-side illumination (BSI).
However, the BSI structure's disadvantage is that it causes color impurity due to increased crosstalk between pixels.
“To remedy such a drawback, Samsung introduced ISOCELL, its first technology that isolates pixels from each other by adding barriers. The name ISOCELL is a compound of ‘isolate’ and ‘cell,’” Kim explained. “By isolating each pixel, ISOCELL can increase a pixel’s full well capacity to hold more light and reduce crosstalk from one pixel to another.”
The ISOCELL pixel technology was not fully formed when it first debuted; the latest image sensors continue to receive new-generation enhancements. One example is ISOCELL's current use of optical walls, made of an innovative low-refractive material, between the color filters.
Kim also shared: “We are developing another innovative high-refractive nano-structure to utilize the light of adjacent pixels to extreme levels. By applying these nano-photonics technologies, we’re achieving high sensitivity that goes beyond the usual limits.”
True Colors with HDR Technology
Conventional image sensors support only 10-bit images, equivalent to roughly 60 dB of dynamic range, but sensors need to support at least 14 bits to capture extremely fine detail in a variety of situations. In multi-exposure HDR, a long-exposure image is used for the dark sections and a short-exposure image for the bright sections; combining the two produces an HDR image.
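The bit-depth and decibel figures above are related by the usual definition of dynamic range, 20·log10(2^n) for an n-bit sensor, and the exposure fusion can be sketched in a few lines. This is a back-of-envelope illustration, not Samsung's pipeline; the exposure ratio and saturation level are assumed values:

```python
import math

# Dynamic range in dB for an n-bit sensor, per the standard definition.
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(10), 1))  # ~60.2 dB, matching the ~60 dB figure
print(round(dynamic_range_db(14), 1))  # ~84.3 dB for a 14-bit sensor

# Minimal dual-exposure fusion sketch: use the long exposure for dark
# regions, and fall back to the (brightness-scaled) short exposure
# wherever the long exposure clips.
def fuse(long_px, short_px, ratio=4, saturation=1023):
    out = []
    for lo, sh in zip(long_px, short_px):
        # A saturated long-exposure pixel carries no detail; recover it
        # from the short exposure, scaled by the exposure ratio.
        out.append(sh * ratio if lo >= saturation else lo)
    return out

print(fuse([100, 1023, 500], [30, 200, 120]))  # → [100, 800, 500]
```

The clipped middle pixel (1023) is replaced by the short exposure's value scaled up, which is what lets the fused image hold both shadow and highlight detail.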
However, this method has a few issues for video. The first is power consumption: capturing dual exposures for a 30 fps HDR video requires shooting at 60 fps, and processing that much image data in real time also demands a great deal of power.