
I'm looking for some kind of formula or algorithm to determine the brightness of a color given the RGB values. I know it can't be as simple as adding the RGB values together and having higher sums be brighter, but I'm kind of at a loss as to where to start.

  • Perceived brightness is what I think I'm looking for, thank you. Commented Feb 27, 2009 at 19:34
  • I found [this code][1] (written in C#) that does an excellent job of calculating the "brightness" of a color. In this scenario, the code is trying to determine whether to put white or black text over the color. [1]:nbdtech.com/Blog/archive/2008/04/27/… Commented May 26, 2010 at 19:34
  • There is a good article (Manipulating colors in .NET - Part 1) about color spaces and conversions between them, including both theory and code (C#). For the answer, look at the Conversion between models topic in the article. Commented Jun 3, 2013 at 15:43
  • See my answer, but a really simple version is: brightness = 0.2*r + 0.7*g + 0.1*b Commented Dec 11, 2022 at 19:08

24 Answers


The method could vary depending on your needs. Here are three ways to calculate Luminance:

  • Luminance (standard for certain colour spaces): (0.2126*R + 0.7152*G + 0.0722*B) source img

  • Luminance (perceived option 1): (0.299*R + 0.587*G + 0.114*B) source img

  • Luminance (perceived option 2, slower to calculate): sqrt( 0.299*R^2 + 0.587*G^2 + 0.114*B^2 ) (thanks to @MatthewHerbst for the corrected coefficients) source img

[Edit: added examples using named css colors sorted with each method.]
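For quick experimentation, the three options can be sketched in Python (the function names are mine; channels are 0-255 values):

```python
def luminance_709(r, g, b):
    # BT.709 coefficients (standard for sRGB-like colour spaces)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminance_601(r, g, b):
    # BT.601 coefficients ("perceived option 1")
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminance_hsp(r, g, b):
    # "Perceived option 2" (HSP); slower because of the square root
    return (0.299 * r**2 + 0.587 * g**2 + 0.114 * b**2) ** 0.5
```

All three agree on pure white and black and differ mainly in how they rank saturated colours.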


20 Comments

Note that both of these emphasize the physiological aspects: the human eyeball is most sensitive to green light, less to red and least to blue.
Note also that all of these are probably for linear 0-1 RGB, while you probably have gamma-corrected 0-255 RGB. The values are not converted the way you might assume.
Not correct. Before applying the linear transformation, one must first apply the inverse of the gamma function for the color space. Then after applying the linear function, the gamma function is applied.
In the last formula, is it (0.299*R)^2 or is it 0.299*(R^2) ?
@KaizerSozay As it's written here it would mean 0.299*(R^2) (because exponentiation goes before multiplication)

The "Accepted" Answer is Incorrect and Incomplete

The only answers that are accurate are the @jive-dadson and @EddingtonsMonkey answers, and in support @nils-pipenbrinck. The other answers (including the accepted) are linking to or citing sources that are either wrong, irrelevant, obsolete, or broken.

Briefly:

  • sRGB must be LINEARIZED before applying the coefficients.
  • Luminance (L or Y) is linearly additive as is light.
  • Perceived lightness (L*) is nonlinear to light as is human perception.
  • HSV and HSL are not even remotely accurate in terms of perception. (Not perceptually uniform).
  • The IEC standard for sRGB specifies a threshold of 0.04045; it is NOT 0.03928 (that was from an obsolete early draft).
  • To be useful (i.e. relative to perception), Euclidian distances require a perceptually uniform Cartesian vector space such as CIELAB. sRGB is not one.

What follows is a correct and complete answer:

Because this thread appears highly in search engines, I am adding this answer to clarify the various misconceptions on the subject.

Luminance is a linear measure of light, spectrally weighted for normal vision but not adjusted for the non-linear perception of lightness. It can be a relative measure, Y as in CIEXYZ, or as L, an absolute measure in cd/m2 (not to be confused with L*).

Perceived lightness is used by some vision models such as CIELAB, here L* (Lstar) is a value of perceptual lightness, and is non-linear relative to light, to approximate the human vision non-linear response curve. (That is, linear to perception but therefore non-linear to light).

Brightness is a perceptual attribute, it does not have a "physical" measure. However some color appearance models do have a value, usually denoted as Q for perceived brightness, which is different than perceived lightness. Note: Brightness is not to be confused with luminous energy, which is also denoted with Q, but is not necessarily calculated the same as brightness for a given color appearance model.

Luma is a gamma encoded and spectrally weighted signal used in some video encodings (such as Y´I´Q´ or Y´U´V´). The prime symbol after the Y (Y´) indicates the data is gamma encoded. Luma is not to be confused with linear luminance (Y or L per above).

Weighting means the digital values for each of the red, green, or blue primary colors in a display are adjusted so that an equal value of red, green, and blue results in a neutral gray or white. This is defined in the given color space, such as BT.709. In other words, 255 blue is much darker than 255 green, in accordance with the luminous efficiency function.

Gamma or transfer curve (TRC) is a curve that is often similar to the perceptual curve, and is commonly applied to image data for storage or broadcast to reduce perceived noise and/or improve data utilization (and related reasons).

In the early days of electronic imaging, it was often referred to as Gamma Correction.


Solution: To estimate perceived lightness

(This assumes conditions close to the reference environment)

  • First convert gamma encoded R´G´B´ image values to linear luminance ( Y ).
    • linearize each R´G´B´ value to linear RGB
    • apply the weighting coefficients to each R, G, B value.
    • sum the results to get Y.
  • Then convert Y to non-linear perceived lightness (L*).

TO FIND LUMINANCE:

...Because apparently it was lost somewhere...

Step One:

Convert all sRGB 8 bit integer values to decimal 0.0-1.0

  vR = sR / 255;
  vG = sG / 255;
  vB = sB / 255;

Step Two:

Convert a gamma encoded RGB to a linear value. sRGB (computer standard) for instance requires a power curve of approximately V^2.2, though the "accurate" transform is:

sRGB to Linear

Where V´ is the gamma-encoded R, G, or B channel of sRGB.
Pseudocode:

function sRGBtoLin(colorChannel) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.

    if ( colorChannel <= 0.04045 ) {
        return colorChannel / 12.92;
    } else {
        return pow((( colorChannel + 0.055)/1.055), 2.4);
    }
}

Step Three:

To find Luminance (Y) apply the standard coefficients for sRGB:

Apply coefficients Y = R * 0.2126 + G * 0.7152 + B * 0.0722

Pseudocode using above functions:

Y = (0.2126 * sRGBtoLin(vR) + 0.7152 * sRGBtoLin(vG) + 0.0722 * sRGBtoLin(vB))

TO FIND PERCEIVED LIGHTNESS:

Step Four:

Take luminance Y from above, and transform to L*

L* from Y equation

Pseudocode:

function YtoLstar(Y) {
    // Send this function a luminance value between 0.0 and 1.0,
    // and it returns L* which is "perceptual lightness"

    if ( Y <= (216/24389) ) {   // The CIE standard states 0.008856, but 216/24389 is the intent (0.008856451679036...)
        return Y * (24389/27);  // The CIE standard states 903.3, but 24389/27 is the intent (903.296296296...)
    } else {
        return pow(Y, (1/3)) * 116 - 16;
    }
}

L* is a value from 0 (black) to 100 (white), where 50 is the perceptual "middle grey". L* = 50 is the equivalent of Y = 18.4%, or in other words an 18% grey card, representing the middle of a photographic exposure (Ansel Adams Zone V).
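Collecting the four steps, here is a minimal Python sketch (a direct transcription of the pseudocode above; the function and variable names are mine):

```python
def srgb_to_lin(color_channel):
    # Decimal sRGB gamma-encoded value in, linearized value out (both 0.0-1.0)
    if color_channel <= 0.04045:
        return color_channel / 12.92
    return ((color_channel + 0.055) / 1.055) ** 2.4

def y_to_lstar(y):
    # Linear luminance Y (0.0-1.0) in, perceptual lightness L* (0-100) out
    if y <= 216 / 24389:           # CIE epsilon, ~0.008856
        return y * 24389 / 27      # CIE kappa, ~903.3
    return y ** (1 / 3) * 116 - 16

def perceived_lightness(s_r, s_g, s_b):
    # 8-bit integer sRGB channels in, L* out (0 = black, 100 = white)
    r, g, b = (srgb_to_lin(v / 255) for v in (s_r, s_g, s_b))
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return y_to_lstar(y)
```

As a sanity check, the sRGB grey #777777 (119, 119, 119) linearizes to roughly 18.4% luminance and comes out very close to L* = 50, matching the middle-grey claim above.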


2023 Edit:

Added by Myndex in an effort toward completeness:

Some of the other answers on this page show the math for Luma (in other words, Y prime), and luma should never be confused with relative luminance (Y). Neither one is perceptually uniform lightness, nor brightness.

  • Luma, while it is gamma compressed and may be useful for encoding image data for later decoding, is not perceptually uniform especially for more saturated colors. As such, Luma is generally not a good metric for perceived lightness.
  • Luminance is linear per light, but not linear per perception, in other words, it is not perceptually uniform, and therefore not useful for predicting lightness perception.
  • Lstar L* in my answer above is often used for predicting a perceived lightness. It is based on Munsell Value, which is derived from experiments in a specifically defined environment using large diffuse color samples.
    • L* is not particularly context sensitive, i.e. not sensitive to things like simultaneous contrast, HK, or other contextual sensations.
    • Therefore, L* can only take us part way to an accurate model of perceived lightness.
    • L* along with CIELAB is good for determining "small differences" between two items, i.e. close to the "Just Noticeable Difference" (JND threshold).
      • L* becomes less accurate at larger supra-threshold levels and large differences, where other contextual factors begin to have a more substantial effect.
      • Supra-threshold curve shapes are different than the JND threshold curve.
    • L* puts perceptual middle grey at 18%.
      • Actual perceived middle grey depends on context.
      • Middle contrast for high spatial frequency stimuli (such as text) is often much higher than 18%.
      • Contrast perception is more complicated than the difference between two lightness values, and simple difference or ratios are of limited utility, and not uniform over the visual range.

Additional References:

IEC 61966-2-1:1999 Standard
sRGB
CIELAB
CIEXYZ
Charles Poynton's Gamma FAQ

Reference links in the above text:

Luminance
Lightness
Lightness vs Brightness at CIE
Luminous energy
Luma
YIQ
YUV and others in the book Video Demystified
Rec._709
Luminous efficiency function
Gamma correction

26 Comments

I created a demonstration comparing BT.601 Luma and CIE 1976 L* Perceptual Gray, using a few MATLAB commands: Luma=rgb2gray(RGB);LAB=rgb2lab(RGB);LAB(:,:,2:3)=0;PerceptualGray=lab2rgb(LAB);
@asdfasdfads Yes, L*a*b* does not take into account a number of psychophysical attributes. Helmholtz-Kohlrausch effect is one, but there are many others. CIELAB is not a "full" image assessment model by any means. In my post I was trying to cover the basic concepts as completely as possible without venturing into the very deep minutiae. The Hunt model, Fairchild's models, and others do a more complete job, but are also substantially more complex.
Thank you for an excellent answer. One slight quibble: your function YtoLstar(Y) takes a value in the range 0 to 1, and returns one in the range 0 to 100, which is a bit confusing.
Hi @ChrisDennis thank you — so, commonly, Y is a 0.0-1.0 range, and L* is commonly 0-100. Sometimes Y is normalized to 0-100, in which case it must be shifted to 0.0-1.0 before applying the exponents for the power curve. L* is almost always a 0-100 range, and the ab color values are often ±128. L* is based/derived from Munsell value, which is 0-10 ... So, as for confusing, wait till you dig deeper into colorimetry—"confusing" is par for the course. (!!)
I converted this answer to TypeScript: gist.github.com/mnpenner/70ab4f0836bbee548c71947021f93607

I think what you are looking for is the RGB -> Luma conversion formula.

Photometric/digital ITU BT.709:

Y = 0.2126 R + 0.7152 G + 0.0722 B

Digital ITU BT.601 (gives more weight to the R and B components):

Y = 0.299 R + 0.587 G + 0.114 B

If you are willing to trade accuracy for performance, there are two approximation formulas for this one:

Y = 0.33 R + 0.5 G + 0.16 B

Y = 0.375 R + 0.5 G + 0.125 B

These can be calculated quickly as

Y = (R+R+B+G+G+G)/6

Y = (R+R+R+B+G+G+G+G)>>3
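The shortcuts can be checked in a few lines of Python (my sketch, for illustration):

```python
def fast6(r, g, b):
    # (2R + 3G + B) / 6  ~  0.33*R + 0.5*G + 0.16*B
    return (r + r + b + g + g + g) / 6

def fast8(r, g, b):
    # (3R + 4G + B) >> 3  =  0.375*R + 0.5*G + 0.125*B exactly
    return (r + r + r + b + g + g + g + g) >> 3
```

The shift version keeps everything in integer arithmetic, which is why it was attractive on older hardware.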

5 Comments

How come your 'calculated quickly' values don't include blue in the approximation at all?
@Jonathan Dumaine - the two quick calculation formulas both include blue - 1st one is (2*Red + Blue + 3*Green)/6, 2nd one is (3*Red + Blue + 4*Green)>>3. granted, in both quick approximations, Blue has the lowest weight, but it's still there.
The quick version is even faster if you do it as: Y = (R<<1+R+G<<2+B)>>3 (thats only 3-4 CPU cycles on ARM) but I guess a good compiler will do that optimisation for you.
@rjmunro I believe you are missing a few parenthesis, the addition operations will go first, so your code is the same as: Y = (R << (1 + R + G) << (2 + B)) >> 3. The correct snippet: int Y = ((R << 1) + R + (G << 2) + B) >> 3;.
Luma (in other words, Y prime) is not luminance (Y), and luma, while gamma compressed and useful for encoding image data for later decoding, is not perceptually uniform, especially for more saturated colors. As such, it is not a good metric for perceived lightness. Luminance is linear, not perceptually uniform, and also not useful for predicting lightness perception.

I have made a comparison of the three algorithms in the accepted answer. I generated colors in a cycle where only about every 400th color was used. Each color is represented by 2x2 pixels; colors are sorted from darkest to lightest (left to right, top to bottom).

1st picture - Luminance (relative)

0.2126 * R + 0.7152 * G + 0.0722 * B

2nd picture - http://www.w3.org/TR/AERT#color-contrast

0.299 * R + 0.587 * G + 0.114 * B

3rd picture - HSP Color Model

sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)

4th picture - WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formula (see @Synchro's answer here)

A pattern can sometimes be spotted in the 1st and 2nd pictures, depending on the number of colors in one row. I never spotted any pattern in the pictures from the 3rd or 4th algorithm.

If I had to choose, I would go with algorithm number 3, since it's much easier to implement and about 33% faster than the 4th.

Perceived brightness algorithm comparison

3 Comments

Your comparison image is incorrect because you did not provide the correct input to all of the functions. The first function requires linear RGB input; I can only reproduce the banding effect by providing nonlinear (i.e. gamma-corrected) RGB. Correcting this issue, you get no banding artifacts and the 1st function is the clear winner.
@Max the ^2 and sqrt included in the third formula are a quicker way of approximating linear RGB from non-linear RGB instead of the ^2.2 and ^(1/2.2) that would be more correct. Using nonlinear inputs instead of linear ones is extremely common unfortunately.
Did you normalize RGB each to {0, 1} or did you keep them at {0, 255}?

Below is the only CORRECT algorithm for converting sRGB images, as used in browsers etc., to grayscale.

It is necessary to apply an inverse of the gamma function for the color space before calculating the inner product. Then you apply the gamma function to the reduced value. Failure to incorporate the gamma function can result in errors of up to 20%.

For typical computer stuff, the color space is sRGB. The right numbers for sRGB are approx. 0.21, 0.72, 0.07. Gamma for sRGB is a composite function that approximates exponentiation by 1/(2.2). Here is the whole thing in C++.

// sRGB luminance(Y) values
const double rY = 0.212655;
const double gY = 0.715158;
const double bY = 0.072187;

// Inverse of sRGB "gamma" function. (approx 2.2)
double inv_gam_sRGB(int ic) {
    double c = ic/255.0;
    if ( c <= 0.04045 )
        return c/12.92;
    else 
        return pow(((c+0.055)/(1.055)),2.4);
}

// sRGB "gamma" function (approx 2.2)
int gam_sRGB(double v) {
    if(v<=0.0031308)
      v *= 12.92;
    else 
      v = 1.055*pow(v,1.0/2.4)-0.055;
    return int(v*255+0.5); // This is correct in C++. Other languages may not
                           // require +0.5
}

// GRAY VALUE ("brightness")
int gray(int r, int g, int b) {
    return gam_sRGB(
            rY*inv_gam_sRGB(r) +
            gY*inv_gam_sRGB(g) +
            bY*inv_gam_sRGB(b)
    );
}

6 Comments

Why did you use a composite function to approximate the exponent? Why not just do a direct calculation? Thanks
That is just the way sRGB is defined. I think the reason is that it avoids some numerical problems near zero. It would not make much difference if you just raised the numbers to the powers of 2.2 and 1/2.2.
JMD - as part of work in a visual perception lab, I have done direct luminance measurements on CRT monitors and can confirm that there is a linear region of luminance at the bottom of the range of values.
I know this is very old, but it's still out there to be searched. I don't think it can be correct. Shouldn't gray(255,255,255) = gray(255,0,0)+gray(0,255,0)+gray(0,0,255)? It doesn't.
@DCBillen: no, since the values are in non-linear gamma-corrected sRGB space, you can't just add them up. If you wanted to add them up, you should do so before calling gam_sRGB.

Rather than getting lost amongst the random selection of formulae mentioned here, I suggest you go for the formula recommended by W3C standards.

Here's a straightforward but exact PHP implementation of the WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formulae. It produces values that are appropriate for evaluating the ratios required for WCAG compliance, as on this page, and as such is suitable and appropriate for any web app. This is trivial to port to other languages.

/**
 * Calculate relative luminance in sRGB colour space for use in WCAG 2.0 compliance
 * @link http://www.w3.org/TR/WCAG20/#relativeluminancedef
 * @param string $col A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <[email protected]>
 */
function relativeluminance($col) {
    //Remove any leading #
    $col = trim($col, '#');
    //Convert 3-digit to 6-digit
    if (strlen($col) == 3) {
        $col = $col[0] . $col[0] . $col[1] . $col[1] . $col[2] . $col[2];
    }
    //Convert hex to 0-1 scale
    $components = array(
        'r' => hexdec(substr($col, 0, 2)) / 255,
        'g' => hexdec(substr($col, 2, 2)) / 255,
        'b' => hexdec(substr($col, 4, 2)) / 255
    );
    //Correct for sRGB
    foreach($components as $c => $v) {
        if ($v <= 0.04045) {
            $components[$c] = $v / 12.92;
        } else {
            $components[$c] = pow((($v + 0.055) / 1.055), 2.4);
        }
    }
    //Calculate relative luminance using ITU-R BT. 709 coefficients
    return ($components['r'] * 0.2126) + ($components['g'] * 0.7152) + ($components['b'] * 0.0722);
}

/**
 * Calculate contrast ratio acording to WCAG 2.0 formula
 * Will return a value between 1 (no contrast) and 21 (max contrast)
 * @link http://www.w3.org/TR/WCAG20/#contrast-ratiodef
 * @param string $c1 A 3 or 6-digit hex colour string
 * @param string $c2 A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <[email protected]>
 */
function contrastratio($c1, $c2) {
    $y1 = relativeluminance($c1);
    $y2 = relativeluminance($c2);
    //Arrange so $y1 is lightest
    if ($y1 < $y2) {
        $y3 = $y1;
        $y1 = $y2;
        $y2 = $y3;
    }
    return ($y1 + 0.05) / ($y2 + 0.05);
}
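For readers outside PHP, the same two WCAG 2.0 formulae can be sketched in Python (a rough port of the above, mine; it takes 8-bit channel tuples instead of hex strings):

```python
def relative_luminance(r8, g8, b8):
    # WCAG 2.0 relative luminance from 8-bit sRGB channels
    def lin(v8):
        v = v8 / 255
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r8) + 0.7152 * lin(g8) + 0.0722 * lin(b8)

def contrast_ratio(rgb1, rgb2):
    # WCAG 2.0 contrast ratio: 1 (no contrast) to 21 (black on white)
    y1 = relative_luminance(*rgb1)
    y2 = relative_luminance(*rgb2)
    lighter, darker = max(y1, y2), min(y1, y2)
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white yields the maximum ratio of 21:1; identical colours yield 1:1.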

5 Comments

The W3C formula is incorrect on a number of levels. It is not taking human perception into account, they are using "simple" contrast using luminance which is linear and not at all perceptually uniform. Among other things, it appears they based it on some standards as old as 1988 (!!!) which are not relevant today (those standards were based on monochrome monitors such as green/black, and referred to the total contrast from on to off, not considering greyscale nor colors).
That’s complete rubbish. Luma is specifically perceptual - that’s why it has different coefficients for red, green, and blue. Age has nothing to do with it - the excellent CIE Lab perceptual colour space dates from 1976. The W3C space isn’t as good, however it is a good practical approximation that is easy to calculate. If you have something constructive to offer, post that instead of empty criticism.
@Syncro no, luma is a GAMMA ENCODED (Y´) part of some video encodings (such as NTSC's YIQ). Luminance, i.e. Y as in CIEXYZ is LINEAR, and not at all perceptual. the W3C are using linear luminance, and simple contrast, which does not properly define contrast in the mid range (it is way off). Writing an article on this right now, I'll post the link when complete. Yes, CIELAB is excellent, but W3C ARE NOT USING IT. The outdated doc I am referring to is ANSI-HFES-100-1988, and not appropriate for on-screen color contrasts.
Just to add/update: we are currently researching replacement algorithms that better model perceptual contrast (discussion in Github Issue 695). However, as a separate issue FYI the threshold for sRGB is 0.04045, and not 0.03928 which was referenced from an obsolete early sRGB draft. The authoritative IEC std uses 0.04045 and a pull request is forthcoming to correct this error in the WCAG. (ref: IEC 61966-2-1:1999) This is in Github issue 360, though to mention, in 8bit there is no actual difference — near end of thread 360 I have charts of errors including 0.04045/0.03928 in 8bit.
And to add to the thread, the replacement method for WCAG 3.0 is APCA, and can be seen at myndex.com/APCA/simple

Consider this an addendum to Myndex's excellent answer. As he (and others) explain, the algorithms for calculating the relative luminance (and the perceptual lightness) of an RGB colour are designed to work with linear RGB values. You can't just apply them to raw sRGB values and hope to get the same results.

Well that all sounds great in theory, but I really needed to see the evidence for myself, so, inspired by Petr Hurtak's colour gradients, I went ahead and made my own. They illustrate the two most common algorithms (ITU-R Recommendation BT.601 and BT.709), and clearly demonstrate why you should do your calculations with linear values (not gamma-corrected ones).

Firstly, here are the results from the older ITU BT.601 algorithm. The one on the left uses raw sRGB values. The one on the right uses linear values.

ITU-R BT.601 colour luminance gradients

0.299 R + 0.587 G + 0.114 B

ITU-R BT.601 colour luminance gradients

At this resolution, you can see that many neighbouring pixels on the left aren't even close to the same brightness—in the top half, there are bright red pixels next to very dark ones. At a higher resolution, unwanted patterns start to appear:

ITU-R BT.601 colour luminance gradients (high res)

The linear one doesn't suffer from these, but there's quite a lot of noise there. Let's compare it to ITU-R Recommendation BT.709…

ITU-R BT.709 colour luminance gradients

0.2126 R + 0.7152 G + 0.0722 B

ITU-R BT.709 colour luminance gradients

Oh boy. Clearly not intended to be used with raw sRGB values! And yet, that's exactly what most people do!

ITU-R BT.709 colour luminance gradients (high-res)

At high-res, you can really see how effective this algorithm is when using linear values. It doesn't have nearly as much noise as the earlier one. While none of these algorithms are perfect, this one is about as good as it gets.

Comments


To add what all the others said:

All these equations work reasonably well in practice, but if you need to be very precise you have to first convert the color to linear color space (apply the inverse image gamma), do the weighted average of the primary colors, and, if you want to display the color, take the luminance back into the monitor gamma.

The luminance difference between ignoring gamma and doing proper gamma handling is up to 20% in the dark grays.

Comments


Interestingly, this formulation for RGB=>HSV just uses v=MAX3(r,g,b). In other words, you can use the maximum of (r,g,b) as the V in HSV.

I checked and on page 575 of Hearn & Baker this is how they compute "Value" as well.

From Hearn&Baker pg 319
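In code, that "Value" is just a channel maximum; for contrast, HSL's lightness (mentioned in the comments below) averages the extremes instead. A quick sketch, mine:

```python
def hsv_value(r, g, b):
    # V in HSV is simply the largest channel
    return max(r, g, b)

def hsl_lightness(r, g, b):
    # L in HSL averages the largest and smallest channels
    return (max(r, g, b) + min(r, g, b)) / 2
```

Note the consequence: pure red (255, 0, 0) gets the same V as white, which is why HSV's V is a poor stand-in for perceived brightness.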

4 Comments

Just for the record the link is dead, archive version here - web.archive.org/web/20150906055359/http://…
HSV is not perceptually uniform (and it isn't even close). It is used only as a "convenient" way to adjust color, but it is not relevant to perception, and the V does not relate to the true value of L or Y (CIE Luminance).
Does that mean #FF0000 is as bright as #FFFFFF?
I believe it's more like lightness = (max(r, g, b) + min(r, g, b)) / 2 in HSL

I was solving a similar task today in JavaScript. I've settled on this getPerceivedLightness(rgb) function for a HEX RGB color. It deals with the Helmholtz-Kohlrausch effect via the Fairchild and Pirrotta formula for luminance correction.

/**
 * Converts RGB color to CIE 1931 XYZ color space.
 * https://www.image-engineering.de/library/technotes/958-how-to-convert-between-srgb-and-ciexyz
 * @param  {string} hex
 * @return {number[]}
 */
export function rgbToXyz(hex) {
    const [r, g, b] = hexToRgb(hex).map(_ => _ / 255).map(sRGBtoLinearRGB)
    const X =  0.4124 * r + 0.3576 * g + 0.1805 * b
    const Y =  0.2126 * r + 0.7152 * g + 0.0722 * b
    const Z =  0.0193 * r + 0.1192 * g + 0.9505 * b
    // For some reason, X, Y and Z are multiplied by 100.
    return [X, Y, Z].map(_ => _ * 100)
}

/**
 * Undoes gamma-correction from an RGB-encoded color.
 * https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation
 * https://stackoverflow.com/questions/596216/formula-to-determine-brightness-of-rgb-color
 * @param  {number}
 * @return {number}
 */
function sRGBtoLinearRGB(color) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.
    if (color <= 0.04045) {
        return color / 12.92
    } else {
        return Math.pow((color + 0.055) / 1.055, 2.4)
    }
}

/**
 * Converts hex color to RGB.
 * https://stackoverflow.com/questions/5623838/rgb-to-hex-and-hex-to-rgb
 * @param  {string} hex
 * @return {number[]} [rgb]
 */
function hexToRgb(hex) {
    const match = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex)
    if (match) {
        match.shift()
        return match.map(_ => parseInt(_, 16))
    }
}

/**
 * Converts CIE 1931 XYZ colors to CIE L*a*b*.
 * The conversion formula comes from <http://www.easyrgb.com/en/math.php>.
 * https://github.com/cangoektas/xyz-to-lab/blob/master/src/index.js
 * @param   {number[]} color The CIE 1931 XYZ color to convert which refers to
 *                           the D65/2° standard illuminant.
 * @returns {number[]}       The color in the CIE L*a*b* color space.
 */
// X, Y, Z of a "D65" light source.
// "D65" is a standard 6500K Daylight light source.
// https://en.wikipedia.org/wiki/Illuminant_D65
const D65 = [95.047, 100, 108.883]
export function xyzToLab([x, y, z]) {
  [x, y, z] = [x, y, z].map((v, i) => {
    v = v / D65[i]
    return v > 0.008856 ? Math.pow(v, 1 / 3) : v * 7.787 + 16 / 116
  })
  const l = 116 * y - 16
  const a = 500 * (x - y)
  const b = 200 * (y - z)
  return [l, a, b]
}

/**
 * Converts Lab color space to Luminance-Chroma-Hue color space.
 * http://www.brucelindbloom.com/index.html?Eqn_Lab_to_LCH.html
 * @param  {number[]}
 * @return {number[]}
 */
export function labToLch([l, a, b]) {
    const c = Math.sqrt(a * a + b * b)
    const h = abToHue(a, b)
    return [l, c, h]
}

/**
 * Converts a and b of Lab color space to Hue of LCH color space.
 * https://stackoverflow.com/questions/53733379/conversion-of-cielab-to-cielchab-not-yielding-correct-result
 * @param  {number} a
 * @param  {number} b
 * @return {number}
 */
function abToHue(a, b) {
    if (a >= 0 && b === 0) {
        return 0
    }
    if (a < 0 && b === 0) {
        return 180
    }
    if (a === 0 && b > 0) {
        return 90
    }
    if (a === 0 && b < 0) {
        return 270
    }
    let xBias
    if (a > 0 && b > 0) {
        xBias = 0
    } else if (a < 0) {
        xBias = 180
    } else if (a > 0 && b < 0) {
        xBias = 360
    }
    return radiansToDegrees(Math.atan(b / a)) + xBias
}

function radiansToDegrees(radians) {
    return radians * (180 / Math.PI)
}

function degreesToRadians(degrees) {
    return degrees * Math.PI / 180
}

/**
 * Saturated colors appear brighter to human eye.
 * That's called Helmholtz-Kohlrausch effect.
 * Fairchild and Pirrotta came up with a formula to
 * calculate a correction for that effect.
 * "Color Quality of Semiconductor and Conventional Light Sources":
 * https://books.google.ru/books?id=ptDJDQAAQBAJ&pg=PA45&lpg=PA45&dq=fairchild+pirrotta+correction&source=bl&ots=7gXR2MGJs7&sig=ACfU3U3uIHo0ZUdZB_Cz9F9NldKzBix0oQ&hl=ru&sa=X&ved=2ahUKEwi47LGivOvmAhUHEpoKHU_ICkIQ6AEwAXoECAkQAQ#v=onepage&q=fairchild%20pirrotta%20correction&f=false
 * @return {number}
 */
function getLightnessUsingFairchildPirrottaCorrection([l, c, h]) {
    const l_ = 2.5 - 0.025 * l
    const g = 0.116 * Math.abs(Math.sin(degreesToRadians((h - 90) / 2))) + 0.085
    return l + l_ * g * c
}

export function getPerceivedLightness(hex) {
    return getLightnessUsingFairchildPirrottaCorrection(labToLch(xyzToLab(rgbToXyz(hex))))
}

Comments


Here's a bit of C code that should properly calculate perceived luminance.

// reverses the rgb gamma
#define inverseGamma(t) (((t) <= 0.0404482362771076) ? ((t)/12.92) : pow(((t) + 0.055)/1.055, 2.4))

//CIE L*a*b* f function (used to convert XYZ to L*a*b*)  http://en.wikipedia.org/wiki/Lab_color_space
#define LABF(t) ((t >= 8.85645167903563082e-3) ? powf(t,0.333333333333333) : (841.0/108.0)*(t) + (4.0/29.0))


float
rgbToCIEL(PIXEL p)
{
   float y;
   float r=p.r/255.0;
   float g=p.g/255.0;
   float b=p.b/255.0;

   r=inverseGamma(r);
   g=inverseGamma(g);
   b=inverseGamma(b);

   //Observer = 2°, Illuminant = D65 
   y = 0.2125862307855955516*r + 0.7151703037034108499*g + 0.07220049864333622685*b;

   // At this point we've done RGBtoXYZ now do XYZ to Lab

   // y /= WHITEPOINT_Y; The white point for y in D65 is 1.0

    y = LABF(y);

   /* This is the "normal" conversion, which produces values scaled to 100:
    Lab.L = 116.0*y - 16.0;
   */
   return(1.16*y - 0.16); // return values for 0.0 <= L <= 1.0
}

Comments


I did a comparison of methods recommended here, and these are the results:

TLDR:

1- First of all, none of them do magic. However, if you've recently found how useless those averaging or (min + max) / 2 implementations are, you're in the right place.

2- For serious work, the best options are either Rec. 709 or sRGB. They are essentially the same but with different factors and thresholds optimized for TVs or office monitors.

3- If you need a metric to simply determine the brightness of a color, use my formula:

Y = (3 * R + 10 * G + B) // 14

It's fast, simple, and provides results very close to sRGB and Rec. 709.
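A quick sanity check of the integer formula against the float Rec. 709 weights (my sketch; like the formula itself, it deliberately skips linearization):

```python
def fast_brightness(r, g, b):
    # Integer-only approximation: weights 3/14, 10/14, 1/14
    return (3 * r + 10 * g + b) // 14

def rec709_weights(r, g, b):
    # Float Rec. 709 / sRGB weights applied to the same (non-linear) values
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

The integer version stays within a couple of levels of the float weights across the 8-bit range while avoiding floating-point entirely.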

Comparison:
I generated random RGB colors and calculated the luminance using different metrics. I plotted each color as a pixel in an image, with the horizontal position corresponding to its luminance value and the vertical position displaying multiple colors for the same luminance value. This also shows how each method distributes luminance for random samples.

comparison of different luminance calculation metrics

When colors are sorted by their saturation before plotting the diagrams:

enter image description here

And when ordered by hue:

enter image description here

Note that all the diagrams above are plotted using the exact same colors. The only difference lies in how the colors are sorted before plotting.

Rec. 601 weights:
L = 0.299 * R + 0.587 * G + 0.114 * B

Rec. 601 weights (^2):
L = sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
This is called HSP. To use it, R, G, and B must first be scaled to the 0-1 range, and the result must then be rescaled back to the desired range.

sRGB / Rec. 709 weights:
L = 0.2126 * R + 0.7152 * G + 0.0722 * B

sRGB and Rec. 709:
I used Jive Dadson's sRGB implementation, and for Rec. 709 only replaced values. If you see any problems, please let me know.

My Formula:
L = (3 * R + 10 * G + B) // 14
Well, I don't have lab equipment and didn't dive too deep into the math behind sRGB and Rec. 709 to see what happens when the gamma encoding/decoding is bypassed. All I've got is a probably uncalibrated monitor and a pair of eyes with mild red-green color blindness! What I see here, though, is that all those float calculations in sRGB and Rec. 709 just shift highly saturated colors in darker areas from incorrect places to slightly less incorrect places. If that much error is acceptable to you, you might consider making it even simpler.
My formula uses the sRGB/Rec. 709 weights rounded to integers (0.2126/0.0722=2.9446 and 0.7152/0.0722=9.9058). This adds some more error but eliminates floating-point operations and can be done in DWORDs. I actually wonder why nobody has mentioned it yet and allowed this to be mine! :D
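As a quick sanity check (not part of the benchmark script below), here's a small, self-contained Python sketch that compares the integer formula against the float sRGB/Rec. 709 weighted sum over a coarse grid of RGB values; because the rounded weights are so close to the true ones, the worst-case difference stays below 1.5 levels:

```python
# Compare (3R + 10G + B) // 14 against the float sRGB / Rec. 709
# weighted sum on a coarse grid of 8-bit RGB values.

def lum_int(r, g, b):
    return (3 * r + 10 * g + b) // 14

def lum_float(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

worst = 0.0
for r in range(0, 256, 15):
    for g in range(0, 256, 15):
        for b in range(0, 256, 15):
            worst = max(worst, abs(lum_float(r, g, b) - lum_int(r, g, b)))

print(f"worst-case difference: {worst:.3f}")  # stays under 1.5 levels
```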

Here is the code to produce those diagrams:

import matplotlib.pyplot as plt
from PIL import Image
import random
import numpy as np
from matplotlib.colors import rgb_to_hsv
import math

highresfactor = 1  ## I used 10, but it becomes very slow
width = 120 * highresfactor
height = 60 * highresfactor
np.random.seed(43)
colors = np.random.randint(0, 256, [round(width * height * 0.6), 3])

# colors = colors[
#     np.argsort(rgb_to_hsv(colors / 255.0)[:, 1])
# ]  # uncomment to order colors by saturation

# colors = colors[
#     np.argsort(1 - rgb_to_hsv(colors / 255.0)[:, 0])
# ]  # uncomment to order colors by hue


def generate_image(luminance_function):
    img = Image.new("RGB", (width, height))
    counts = [0] * width

    for i in range(colors.shape[0]):
        (r, g, b) = colors[i, :]
        lum = int(round(luminance_function(r, g, b) * (width - 1) / 255))
        if counts[lum] > height - 1:
            continue
        y = counts[lum]
        img.putpixel((lum, height - y - 1), (int(r), int(g), int(b)))
        counts[lum] += 1
    return img


def average_luminance(r, g, b):
    return (r + g + b) / 3


def minmax_luminance(r, g, b):
    return (max([r, g, b]) + min([r, g, b])) / 2


def div6__luminance(r, g, b):
    return (r + r + g + g + g + b) / 6


def rightshift_luminance(r, g, b):
    if highresfactor > 2:
        return (r + r + r + g + g + g + g + b) / 8
    return (r + r + r + g + g + g + g + b) >> 3


def rec601_luminance_no_lin(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b


def rec601_luminance_2(r, g, b):
    return (
        math.sqrt(
            0.299 * (r / 255) ** 2 + 0.587 * (g / 255) ** 2 + 0.114 * (b / 255) ** 2
        )
        * 255
    )


def rec709_luminance_no_lin(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def rec709_luminance_int(r, g, b):
    if highresfactor > 2:
        return (3 * r + 10 * g + b) / 14
    return (3 * r + 10 * g + b) // 14


# Function to calculate sRGB luminance
def sRGB_luminance(r, g, b):
    # Inverse of sRGB "gamma" function
    def inv_gam_sRGB(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    # sRGB "gamma" function
    def gam_sRGB(v):
        return (
            v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055
        ) * 255 + 0.5

    # GRAY VALUE ("brightness")
    def gray(r, g, b):
        return gam_sRGB(
            0.212655 * inv_gam_sRGB(r)
            + 0.715158 * inv_gam_sRGB(g)
            + 0.072187 * inv_gam_sRGB(b)
        )

    return gray(r, g, b)


# Function to calculate Rec. 709 luminance
def rec709_luminance(r, g, b):
    # Inverse of Rec. 709 "gamma" function
    def inv_gam_rec709(c):
        c = c / 255.0
        return c / 4.5 if c < 0.018 else ((c + 0.099) / 1.099) ** 2.2

    # Rec. 709 "gamma" function
    def gam_rec709(v):
        return (v * 4.5 if v < 0.018 else 1.099 * v ** (1.0 / 2.2) - 0.099) * 255 + 0.5

    # GRAY VALUE ("brightness")
    def gray_rec709(r, g, b):
        return gam_rec709(
            0.212655 * inv_gam_rec709(r)
            + 0.715158 * inv_gam_rec709(g)
            + 0.072187 * inv_gam_rec709(b)
        )

    return gray_rec709(r, g, b)


methods = [
    (average_luminance, "(R+G+B) / 3"),
    # (minmax_luminance, "(Min+Max) / 2"), # comment another to see this
    (rightshift_luminance, "(3R+4G+B) / 8 (or >> 3)"),
    (div6__luminance, "(2R+3G+B) / 6"),
    (rec601_luminance_no_lin, "Rec.601 weights"),
    (rec601_luminance_2, "Rec.601 weights (^2)"),
    (sRGB_luminance, "sRGB"),
    (rec709_luminance_int, "(3R+10G+B) // 14"),
    (rec709_luminance_no_lin, "sRGB / Rec. 709 weights"),
    (rec709_luminance, "Rec. 709"),
]

fig, axes = plt.subplots(3, 3, sharex=True, sharey=True)
axes = axes.flatten()
for i in range(len(methods)):
    im = generate_image(methods[i][0])
    axes[i].imshow(im)
    axes[i].text(
        0.975 * width,
        0.033 * height,
        str(methods[i][1]),
        color="white",
        va="top",
        ha="right",
        size=7,
        weight="bold",
        bbox=dict(facecolor="black", alpha=0.35),
    )
    axes[i].set_xticks([0, width / 2, width - 1])
    axes[i].set_xticklabels([0, 127, 255])
    axes[i].set_yticks([])


plt.tight_layout()
plt.show(block=True)

It can also do this:

enter image description here

2 Comments

Rounded values are fine, I’m sure they’re useful on some embedded device that can’t do floats. But a calibrated monitor is required equipment to do anything related to color perception. If you adjust formulas to get good results on an uncalibrated monitor, then you are effectively calibrating the monitor. The equations will not be useful on any other monitor.
@CrisLuengo Absolutely, calibrated monitors are a must for real color work, but mine is calibrated for coding and I'm fine with it. My point is, we should match our effort to the task. For example, what brought me here was finding a darker or brighter color for a solid background. I only need some metric that can distinguish 00FF00 from 0000FF. Simplifying isn't just about possibility; cutting down on that many floating-point operations for such a simple task might be my small but mighty contribution to saving the planet :)
3

As mentioned by @Nils Pipenbrinck:

All these equations work kinda well in practice, but if you need to be very precise you have to [do some extra gamma stuff]. The luminance difference between ignoring gamma and doing proper gamma is up to 20% in the dark grays.

Here's a fully self-contained JavaScript function that does the "extra" stuff to get that extra accuracy. It's based on Jive Dadson's C++ answer to this same question.

// Returns greyscale "brightness" (0-1) of the given 0-255 RGB values
// Based on this C++ implementation: https://stackoverflow.com/a/13558570/11950764
function rgbBrightness(r, g, b) {
  let v = 0;
  v += 0.212655 * ((r/255) <= 0.04045 ? (r/255)/12.92 : Math.pow(((r/255)+0.055)/1.055, 2.4));
  v += 0.715158 * ((g/255) <= 0.04045 ? (g/255)/12.92 : Math.pow(((g/255)+0.055)/1.055, 2.4));
  v += 0.072187 * ((b/255) <= 0.04045 ? (b/255)/12.92 : Math.pow(((b/255)+0.055)/1.055, 2.4));
  return v <= 0.0031308 ? v*12.92 : 1.055 * Math.pow(v,1.0/2.4) - 0.055;
}

See Myndex's answer for a more accurate calculation.

Comments

3

A single line:

brightness = 0.2*r + 0.7*g + 0.1*b

When r,g,b values are between 0 to 255, then the brightness scale is also between 0 (black) to 255 (white).

It can be fine-tuned of course, but I've found it to be sufficient for most use cases.
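For completeness, a minimal Python sketch of this one-liner (the function name is mine):

```python
def brightness(r, g, b):
    """Quick perceptual brightness estimate; r, g, b in 0-255, result in 0-255."""
    return 0.2 * r + 0.7 * g + 0.1 * b

print(brightness(255, 0, 0))   # pure red   -> ~51: red contributes little
print(brightness(0, 255, 0))   # pure green -> ~178.5: green dominates
```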

Comments

2

The answer from Myndex, coded in Java:

public static double calculateRelativeLuminance(Color color)
{
    double red = color.getRed() / 255.0;
    double green = color.getGreen() / 255.0;
    double blue = color.getBlue() / 255.0;

    double r = (red <= 0.04045) ? red / 12.92 : Math.pow((red + 0.055) / 1.055, 2.4);
    double g = (green <= 0.04045) ? green / 12.92 : Math.pow((green + 0.055) / 1.055, 2.4);
    double b = (blue <= 0.04045) ? blue / 12.92 : Math.pow((blue + 0.055) / 1.055, 2.4);

    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

I used it to calculate the contrast ratio of the background color and determine if the text color would be bright or not. Full example:

public static boolean isBright(Color backgroundColor)
{
    double backgroundLuminance = calculateRelativeLuminance(backgroundColor);
    double whiteContrastRatio = calculateContrastRatio(backgroundLuminance, 1.0);
    double blackContrastRatio = calculateContrastRatio(backgroundLuminance, 0.0);
    return whiteContrastRatio > blackContrastRatio;
}

public static double calculateRelativeLuminance(Color color)
{
    double red = color.getRed() / 255.0;
    double green = color.getGreen() / 255.0;
    double blue = color.getBlue() / 255.0;

    double r = (red <= 0.04045) ? red / 12.92 : Math.pow((red + 0.055) / 1.055, 2.4);
    double g = (green <= 0.04045) ? green / 12.92 : Math.pow((green + 0.055) / 1.055, 2.4);
    double b = (blue <= 0.04045) ? blue / 12.92 : Math.pow((blue + 0.055) / 1.055, 2.4);

    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

public static double calculateContrastRatio(double backgroundLuminance, double textLuminance)
{
    var brightest = Math.max(backgroundLuminance, textLuminance);
    var darkest = Math.min(backgroundLuminance, textLuminance);
    return (brightest + 0.05) / (darkest + 0.05);
}

Comments

1

RGB Luminance value = 0.3 R + 0.59 G + 0.11 B

http://www.scantips.com/lumin.html

If you're looking for how close to white the color is you can use Euclidean Distance from (255, 255, 255)

I think RGB color space is perceptually non-uniform with respect to the L2 Euclidean distance. Uniform spaces include CIE LAB and LUV.

Comments

1

The inverse-gamma formula by Jive Dadson needs to have the half-adjust removed when implemented in JavaScript, i.e. the return from the function gam_sRGB needs to be return int(v*255); not return int(v*255+.5);. The half-adjust rounds up, and this can cause a value one too high on an R=G=B (grey) triad. Greyscale conversion of an R=G=B triad should produce a value equal to R; that is one proof that the formula is valid. See Nine Shades of Greyscale for the formula in action (without the half-adjust).
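The grey-triad proof can be checked directly. Here's a hedged Python sketch (ported from Jive Dadson's formulas, with the +0.5 half-adjust kept) that round-trips every R=G=B value through linearization and back:

```python
def inv_gam_srgb(c):
    # sRGB decode: 0-255 channel value -> linear 0-1
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gam_srgb(v):
    # sRGB encode: linear 0-1 -> 0-255, with the +0.5 half-adjust
    v = v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
    return int(v * 255 + 0.5)

# For a grey triad (R = G = B), the weighted sum of the linearized
# channels equals the linearized channel itself (the weights sum to 1),
# so re-encoding must give back the original value.
for c in range(256):
    lin = (0.212655 * inv_gam_srgb(c)
           + 0.715158 * inv_gam_srgb(c)
           + 0.072187 * inv_gam_srgb(c))
    assert gam_srgb(lin) == c
```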

1 Comment

I did the experiment. In C++ it needs the +0.5, so I put it back in. I added a comment about translating to other languages.
1

I wonder how those RGB coefficients were determined. I did an experiment myself and I ended up with the following:

Y = 0.267 R + 0.642 G + 0.091 B

Close, but obviously different from the long-established ITU coefficients. I wonder whether the coefficients could differ for each observer: we may all have a different number of cones and rods on the retina, and especially the ratio between the different types of cones may differ.

For reference:

ITU BT.709:

Y = 0.2126 R + 0.7152 G + 0.0722 B

ITU BT.601:

Y = 0.299 R + 0.587 G + 0.114 B

I did the test by quickly moving a small gray bar on a bright red, bright green and bright blue background, and adjusting the gray until it blended in as much as possible. I also repeated that test with other shades, and on different displays, even one with a fixed gamma factor of 3.0, but it all looks the same to me. Moreover, the ITU coefficients are literally wrong for my eyes.

And yes, I presumably have normal color vision.

2 Comments

In your experiments did you linearize to remove the gamma component first? If you didn't that could explain your results. BUT ALSO, the coefficients are related to the CIE 1931 experiments and those are an average of 17 observers, so yes there is individual variance in results.
And to add: Also, the 1931 CIE values are based on a 2° observer, and in addition there are known errors in the blue region. The 10° observer values are significantly different as the S cones are not in the foveal central vision. In both cases, effort was made to avoid rod intrusion, keeping the luminance levels in the photopic area. If measurements are made in the mesopic region, rod intrusion will also skew results.
0

The HSV colorspace should do the trick; see the Wikipedia article. Depending on the language you're working in, you may find a library to do the conversion.

H is hue which is a numerical value for the color (i.e. red, green...)

S is the saturation of the color, i.e. how 'intense' it is

V is the 'brightness' of the color.
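A quick sketch with Python's standard colorsys module shows what V gives you, and also hints at the caveat raised in the comments: pure yellow and pure blue get the same V despite very different perceived brightness.

```python
import colorsys

# colorsys works on 0-1 floats; V is simply max(R, G, B)
h, s, v = colorsys.rgb_to_hsv(1.0, 1.0, 0.0)   # pure yellow
print(v)                                        # 1.0
h, s, v = colorsys.rgb_to_hsv(0.0, 0.0, 1.0)   # pure blue
print(v)                                        # also 1.0, yet blue looks much darker
```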

3 Comments

Problem with the HSV color space is that you can have the same saturation and value, but different hues, for blue and yellow. Yellow is much brighter than blue. Same goes for HSL.
HSV gives you the "brightness" of a color in a technical sense; for perceptual brightness, HSV really fails.
HSV and HSL are not perceptually accurate (and it's not even close). They are useful for "controls" for adjusting relative color, but not for accurate prediction of perceptual lightness. Use L* from CIELAB for perceptual lightness.
0

I read through this interesting thread hoping to get something out of it...

But I feel that the thread's most prominent aspect is how much people, and subsequently companies and industries, tend to disagree on pretty much everything!

Not that I'm any different... I don't even agree with the use of the Rec. 601 or ITU-R BT.709 grayscale standards!

As a software developer, my countless experiments in image manipulation have convinced me that the ITU-R BT.2100 standard is superior to the above, closely followed by my very own:

IMT standard:  R=0.2582  G=0.66  B=0.0818

However, I do include all 5 standards in my software which I call "the Grayscale Model", so that users can make their own choice before every operation where relevant.

Ah well... to each their own I guess! :)

Anyway, for those interested in offering their users an assortment of grayscale models (and I think we should) they're welcome to the function below.

//-----------------------------------------------------------------------------------------

function GetGrayScale(Model){
  let R=0, G=0, B=0;
  if(Model==0){R=0.3333; G=0.3333; B=0.3333;} else // RGB Average (see note below)
  if(Model==1){R=0.2990; G=0.587;  B=0.114;}  else // Rec.601
  if(Model==2){R=0.2126; G=0.7152; B=0.0722;} else // ITU-R BT.709
  if(Model==3){R=0.2627; G=0.678;  B=0.0593;} else // ITU-R BT.2100
  if(Model==4){R=0.2582; G=0.66;   B=0.0818;}      // IMT Standard
  return [R,G,B];
}

//-----------------------------------------------------------------------------------------

NB: Feel free to add the missing 0.0001 to any channel you please for all the difference RGB averaging will make!

Function usage:

var Gray=GetGrayScale(Model), R=Gray[0], G=Gray[1], B=Gray[2];

Basic Luminance for the non-scientific coder:

 L=(((D[i]*R)+(D[i+1]*G)+(D[i+2]*B))*100)/255; // 0..100%

Or...

 L=(D[i]*R)+(D[i+1]*G)+(D[i+2]*B); // 0..255

Over and out.

Comments

-1

The 'V' of HSV is probably what you're looking for. MATLAB has an rgb2hsv function, and the previously cited Wikipedia article is full of pseudocode. If an RGB-to-HSV conversion is not feasible, a less accurate model would be the grayscale version of the image.

Comments

-1

To determine the brightness of a color with R, I convert the color from the RGB system to the HSV system.

In my script, I use the HEX system code beforehand for other reasons, but you can also start with an RGB code using rgb2hsv {grDevices}. The documentation is here.

Here is this part of my code:

 sample <- c("#010101", "#303030", "#A6A4A4", "#020202", "#010100")
 hsvc <-rgb2hsv(col2rgb(sample)) # convert HEX to HSV
 value <- as.data.frame(hsvc) # create data.frame
 value <- value[3,] # extract the information of brightness
 order(value) # order the colors by brightness

Comments

-2

For clarity, the formulas that use a square root need to be

sqrt(coefficient * (colour_value^2))

not

sqrt((coefficient * colour_value)^2)

The proof of this lies in the conversion of an R=G=B triad to greyscale R. That will only be true if you square the colour value, not the colour value times the coefficient. See Nine Shades of Greyscale
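A small Python check (the coefficient names are mine) confirms that only the first form maps an R=G=B triad back to R:

```python
import math

coefs = (0.299, 0.587, 0.114)   # Rec. 601 weights, which sum to 1

def right(r, g, b):
    # sqrt of sum(coefficient * value^2)
    return math.sqrt(sum(c * v * v for c, v in zip(coefs, (r, g, b))))

def wrong(r, g, b):
    # sqrt of sum((coefficient * value)^2)
    return math.sqrt(sum((c * v) ** 2 for c, v in zip(coefs, (r, g, b))))

print(right(128, 128, 128))  # ~128: grey maps to itself
print(wrong(128, 128, 128))  # ~85.6: grey does NOT map to itself
```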

2 Comments

there are parenthesis mismatches
unless the coefficient you use is the square root of the correct coefficient.
-4

Please define brightness. If you're looking for how close to white the color is you can use Euclidean Distance from (255, 255, 255)
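A minimal sketch of that idea (with the caveat from the comment below that sRGB is not perceptually uniform, so this is a rough "whiteness" measure at best):

```python
import math

def distance_from_white(r, g, b):
    """Euclidean distance from (255, 255, 255); 0 = white, ~441.7 = black."""
    return math.dist((r, g, b), (255, 255, 255))

print(distance_from_white(255, 255, 255))  # 0.0
print(distance_from_white(0, 0, 0))        # ~441.67, i.e. 255 * sqrt(3)
```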

1 Comment

No, you can't use Euclidean distance between sRGB values; sRGB is not a perceptually uniform Cartesian/vector space. If you want to use Euclidean distance as a measure of color difference, you need to at least convert to CIELAB, or better yet, use a CAM like CIECAM02.
