49
$\begingroup$

I know this question sounds dumb, but please bear with me. It came to mind while I was looking at the photos in an astronomy book. How is it possible that IR and UV photos of stars and nebulae can be taken if our eyes cannot detect that light?

$\endgroup$
16
  • 23
    $\begingroup$ "I know this question sounds dumb" not at all! If anything, the question demonstrates an above-average level of intelligence: a lot of people wouldn't even think to ask... $\endgroup$
    – Aaron F
    Commented Feb 2, 2020 at 20:30
  • 7
    $\begingroup$ I've deleted some comments that didn't seem to be targeted at improving the question. $\endgroup$
    – David Z
    Commented Feb 3, 2020 at 3:01
  • $\begingroup$ A nice book is Alien Vision by Austin Richards. amazon.com/Alien-Vision-Exploring-Electromagnetic-Technology/dp/…. It's getting old now, but it nicely shows what you can see outside the visible spectrum. $\endgroup$ Commented Feb 4, 2020 at 10:03
  • 2
    $\begingroup$ Have you ever heard someone play a song on a piano, but an octave higher or lower? It's like that, but with a different kind of waves. $\endgroup$ Commented Feb 4, 2020 at 15:26
  • $\begingroup$ It's not really an answer to the question, but it might interest you that some animals, including some human women, are tetrachromats able to see in the UV range: bbc.com/future/article/… $\endgroup$
    – JimmyJames
    Commented Feb 4, 2020 at 18:11

8 Answers

84
$\begingroup$

The images are taken by UV/IR cameras, but the frequencies are then mapped down or up into the visible region using some scheme. If you want to preserve the ratios between frequencies, you use a linear scaling. In general, the scaling is chosen to strike a balance between aesthetics and informativeness.
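As a rough sketch of what such a scheme might look like (Python; the band edges, frequencies and the crude wavelength-to-RGB table below are made up for illustration, not taken from any real instrument):

```python
import numpy as np

def wavelength_to_rgb(nm):
    """Crude piecewise approximation of the display colour of a visible wavelength (380-750 nm)."""
    if 380 <= nm < 440:  return ((440 - nm) / 60, 0.0, 1.0)
    if 440 <= nm < 490:  return (0.0, (nm - 440) / 50, 1.0)
    if 490 <= nm < 510:  return (0.0, 1.0, (510 - nm) / 20)
    if 510 <= nm < 580:  return ((nm - 510) / 70, 1.0, 0.0)
    if 580 <= nm < 645:  return (1.0, (645 - nm) / 65, 0.0)
    if 645 <= nm <= 750: return (1.0, 0.0, 0.0)
    return (0.0, 0.0, 0.0)   # still outside the visible range after scaling

# Hypothetical mid-infrared detections between 10 and 30 THz. Multiplying every
# frequency by the same factor is the ratio-preserving ("linear") scaling.
detected = np.array([10e12, 15e12, 22e12, 30e12])      # Hz
scale = 400e12 / detected.min()                        # put the lowest frequency at the red end
shifted = detected * scale                             # now spans 400-1200 THz

for f_orig, f_vis in zip(detected, shifted):
    nm = 3e8 / f_vis * 1e9                             # lambda = c / f, in nanometres
    print(f"{f_orig/1e12:5.1f} THz -> {wavelength_to_rgb(nm)}")
```

Note that the visible band spans less than a factor of two in frequency, so a strictly ratio-preserving scaling cannot fit this factor-of-three band inside it and part of it gets clipped. That is exactly the trade-off between faithfulness and aesthetics mentioned above, and why stretches that only preserve the ordering of frequencies are often used instead.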

In light of the overwhelming attention this question has received, I have decided to add the recent image of a black hole taken by the Event Horizon Telescope. That image was captured in radio waves by an array of radio telescopes at eight different sites on Earth, and the data were then combined and rendered in the following way.

[Image: the Event Horizon Telescope's radio image of a black hole]

A point I forgot to mention, which was pointed out by @thegreatemu in the comments below, is that the black hole image data were all collected at a single radio wavelength ($1.3$ mm). The colour in this image signifies the intensity of the radio signal: brighter colour means a stronger signal.
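For single-wavelength data like this, the whole mapping is just intensity to colour. A minimal sketch in Python (with synthetic data standing in for the real measurements, and matplotlib's afmhot colormap as an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the real data: a 2D array of radio intensities measured at a
# single wavelength. The ring-like values below are purely synthetic.
y, x = np.mgrid[-1:1:200j, -1:1:200j]
r = np.hypot(x, y)
intensity = np.exp(-((r - 0.5) ** 2) / 0.02) * (1 + 0.3 * np.cos(3 * np.arctan2(y, x)))

# The "colour" in the published image is a one-dimensional mapping from
# intensity to a colour scale (a heat map); no frequency information is encoded.
plt.imshow(intensity, cmap="afmhot", origin="lower")
plt.colorbar(label="relative intensity at 1.3 mm")
plt.title("False-colour rendering of single-wavelength data")
plt.show()
```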

$\endgroup$
8
  • 71
    $\begingroup$ Maybe an analogy to music will make this more understandable to a layman: I'm not a musician, but I remember a teacher of mine playing songs one octave lower than they were usually played in. To a species unable to hear the higher octave, this would be a representation of the original song, just as the picture provided is a representation of the original data. Both representations essentially contain the same information. $\endgroup$
    – user224659
    Commented Feb 3, 2020 at 5:50
  • 4
    $\begingroup$ Note that sight and sound are very different senses. In sound we are very sensitive to the spectrum, but have only a minimal ability to resolve spatially. In sight we are very sensitive to spatial variation, but the spectrum is crushed down into trichromatic color. $\endgroup$ Commented Feb 3, 2020 at 16:36
  • 23
    $\begingroup$ While they are different senses, the analogy still holds. $\endgroup$ Commented Feb 3, 2020 at 18:28
  • 2
    $\begingroup$ This is good info, but misleading in most cases. Usually the frequency information is NOT preserved, as is the case in the example figure. The colors presented are a so-called heat map, where the INTENSITY at each pixel is mapped onto a color. "Hotter" pixels (i.e. whiter) received more radio power, averaged over the entire observed band. Occasionally you will see different frequency bands mapped onto RGB intensities separately, but this is rare. $\endgroup$ Commented Feb 3, 2020 at 22:01
  • 2
    $\begingroup$ In every article I read on that topic, it seemed the journalist was under the impression that it was an actual optical photograph of some sort rather than a visualisation. You can visualise many things as a pseudo-photograph: cellphone radio coverage, or something not electromagnetic at all like noise or pollution levels. $\endgroup$
    – Rich
    Commented Feb 4, 2020 at 3:51
45
$\begingroup$

When you are looking at a UV or IR photo, the intensities of these (invisible) rays are represented by different (visible) colors and brightnesses in the photo. This technique of rendering things our eyes cannot see as images that we can see is common, and the images thus prepared are called false color images.
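One common way of building such a false color image, sketched below in Python, is to take separate exposures through different filters and assign each one to a display channel. The arrays here are random placeholders; in practice they would be co-registered images from the telescope.

```python
import numpy as np

# Three co-registered monochrome exposures through different filters, none of
# which needs to be a visible-light filter (e.g. two IR bands and one UV band).
h, w = 128, 128
ir_long  = np.random.rand(h, w)   # placeholder data, longest-wavelength band
ir_short = np.random.rand(h, w)   # placeholder data, shorter IR band
uv       = np.random.rand(h, w)   # placeholder data, UV band

def stretch(img):
    """Normalise a band to the 0-1 range so it can serve as a display channel."""
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

# Assign the bands to display channels in wavelength order: longest -> red,
# shortest -> blue. The colors are "false", but the spatial structure is real.
rgb = np.dstack([stretch(ir_long), stretch(ir_short), stretch(uv)])
rgb8 = (rgb * 255).astype(np.uint8)   # ready to save or display as an ordinary image
```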

$\endgroup$
5
  • 1
    $\begingroup$ TL;DR: every answer: 'UV/IR cameras take greyscale images and the images thus prepared are called false color images. The frequencies are mapped down/up to visible region using some scheme.' $\endgroup$
    – Mazura
    Commented Feb 3, 2020 at 1:28
  • 13
    $\begingroup$ @Mazura not necessarily greyscale; UV/IR sensors can capture light intensity at multiple wavelengths in the UV or IR spectrum, which can then be artificially shifted to the visible spectrum. $\endgroup$
    – zakinster
    Commented Feb 3, 2020 at 10:03
  • $\begingroup$ @zakinster in principle yes, but in practice this is pretty much never done. Your monitor has only three colors, so at best you can map the average intensity over 3 different bands to RGB $\endgroup$ Commented Feb 3, 2020 at 22:02
  • $\begingroup$ Why stop there? You can map any kind of wave frequency to colors, which means you can see (a picture representation of) sounds, barometric pressure, WiFi signals, etc. $\endgroup$
    – refaelio
    Commented Feb 4, 2020 at 9:59
  • $\begingroup$ @thegreatemu Actually we often make RGB images using more than one non-visible color. One example is seen in Hayes et al. (2013), where Hα, far-UV, and Lyα (which is also in the UV) are mapped to R, G, and B, respectively. $\endgroup$
    – pela
    Commented Feb 4, 2020 at 20:48
45
$\begingroup$

Because you can build a camera that can.

The sensitivity of a camera is not determined by human eyes, but by the construction of the camera's sensor. Since in most common applications we want the camera to capture something that mimics what our eyes see, we generally build cameras to be sensitive to approximately the same frequencies of light as our eyes.

However, there's nothing to prevent you from building a camera that is sensitive to a different frequency spectrum, and so we do. Not only ultraviolet, but infrared, X-rays, and more are all possible targets.

If you're asking why the pictures are visible, well that's because we need to see them with our eyes, so we paint them using visible pigments or display them on displays that emit visible light. However, this doesn't make the pictures "wrong" - at the basic level, both visible light and UV/IR pictures taken by modern cameras are the same thing: long strings of binary bits, not "colors". They take interpretation to make them useful to us.

Typically, UV/IR cameras take greyscale images, because there are no sensible "colors" to assign to the different frequencies - or better, "color" is just something made up by our brains, not a property of the light itself. So coloring all "invisible" light grey is no more "wrong" than anything else - and it makes the sensors easier (which means "cheaper") to build, because the way you make a color-discriminating camera is effectively the same as the way your eyes are made: you have sub-pixel elements that are sensitive to different frequency ranges.
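A small sketch of the difference (Python; the raw counts are synthetic and the demosaicing is deliberately naive):

```python
import numpy as np

# A monochrome UV/IR sensor reports one number per pixel: there's no colour
# information to recover, so the natural rendering is a grey-scale image.
mono = np.random.randint(0, 4096, size=(4, 4))            # synthetic 12-bit raw counts
grey = (mono / 4095 * 255).astype(np.uint8)               # same value goes to R, G and B on screen

# A colour camera gets colour the way the eye does: neighbouring sub-pixels sit
# behind different filters, e.g. the usual RGGB Bayer layout
#     R G
#     G B
# A (very) naive demosaic averages each filter's samples over every 2x2 cell.
raw = np.random.randint(0, 4096, size=(4, 4)).astype(float)
r = raw[0::2, 0::2]
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2
b = raw[1::2, 1::2]
rgb = (np.dstack([r, g, b]) / 4095 * 255).astype(np.uint8)  # one colour pixel per 2x2 block
print(grey.shape, rgb.shape)                                # (4, 4) vs (2, 2, 3)
```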

$\endgroup$
10
  • 10
    $\begingroup$ You can even build "cameras" that don't use electromagnetic radiation at all, for instance ultrasound or electron microscopes. $\endgroup$
    – jamesqf
    Commented Feb 2, 2020 at 18:07
  • 1
    $\begingroup$ Many ordinary cameras can detect some infrared, and they typically display it as a pink or purplish color. To see this, point your phone camera at the end of a TV remote control. $\endgroup$ Commented Feb 2, 2020 at 19:04
  • 2
    $\begingroup$ @Jeanne Pindar: Most digital cameras (including those in phones) have a filter that blocks a lot of the infrared, though. It's possible to remove those filters and take interesting infrared photos: a web search for "digital infrared conversion" will return lots of hits. It can be done for UV, too, though it's more complicated. $\endgroup$
    – jamesqf
    Commented Feb 2, 2020 at 23:44
  • $\begingroup$ @Jeanne Pindar : Yes, indeed. So you can use your digital camera - though you have to modify it by removing and replacing the filter, because cameras include one to keep infrared from "contaminating" the visible picture - to film from about 750-1100 nm (most sensitive at 750 nm, least at 1100 nm). However, there will be (I believe, at least) no color discrimination by frequency/wavelength. $\endgroup$ Commented Feb 3, 2020 at 4:28
  • $\begingroup$ @jamesqf The "phone camera" trick worked well with early phone cameras; the idea persisted even though it doesn't really work anymore :D $\endgroup$
    – Luaan
    Commented Feb 3, 2020 at 9:09
12
$\begingroup$

Think of shining an intense infrared beam onto wood. The wood is scorched. You can see the scorching with visible light, even though the infrared beam is not visible. Do it again, but put some metal in the way to block part of the beam. Now you can see the shadow of the metal.

This is much like how X-ray film shows bones.

Cameras are similar. When sensors in the camera are stimulated by UV, IR or X-rays, they produce an electrical signal. These signals are stored as pixels in an image. You can display the image on a monitor and choose to make the pixels whatever color you like.
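In code, that last step really is just a choice. A minimal sketch (Python; the counts are synthetic and the orange tint is an arbitrary pick, not what any real instrument does):

```python
import numpy as np

# Synthetic stand-in for raw detector readings (UV, IR, X-ray... it doesn't matter).
counts = np.random.poisson(lam=50, size=(64, 64)).astype(float)

# Normalise to 0-1 and render with whatever colour we like: here the signal
# strength scales an arbitrary orange tint, but grey or any colormap works too.
norm = counts / counts.max()
tint = np.array([1.0, 0.6, 0.1])                          # arbitrary display colour (RGB)
image = (norm[..., None] * tint * 255).astype(np.uint8)   # shape (64, 64, 3), ready to display
```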

$\endgroup$
1
  • 4
    $\begingroup$ Don't actually do this experiment in case you accidentally reflect the infrared beam into your eyes with the metal. $\endgroup$
    – user20574
    Commented Feb 3, 2020 at 11:10
8
$\begingroup$

Imagine writing a program that listens to sounds through your computer's microphone and paints the screen a different color for every different note (or frequency). Suddenly you can point it at somebody singing and "see" the sound. You can even show it to a deaf person, and they can have an idea of what kind of song they're watching through the colors they see. None of their senses can capture sound, yet they're seeing it on the screen, because the thing that's capturing the sound is the microphone.

A UV camera is pretty much the same. A sensor captures some light that your eyes can't, and a program paints the screen a different color for every UV frequency you can't see.
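A minimal sketch of the sound-to-color program described above (Python; a synthetic 440 Hz tone stands in for the microphone, since reading real audio hardware would need an extra library and isn't the point):

```python
import numpy as np
import colorsys

# One second of a synthetic A4 note, standing in for microphone input.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440.0 * t)

# Find the dominant frequency with an FFT...
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
dominant = freqs[np.argmax(spectrum)]

# ...and map the audible range (~20 Hz - 20 kHz, on a log scale) onto hue.
# The mapping is completely arbitrary, just like the colours in a UV image.
hue = (np.log10(dominant) - np.log10(20)) / (np.log10(20000) - np.log10(20))
r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
print(f"{dominant:.0f} Hz -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")
```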

$\endgroup$
7
$\begingroup$

A camera does not store light. The light you see when you look at a photo is not the same as the light that was captured when the photo was taken.

When light enters a digital camera it triggers electrical changes in the image sensor, which are converted to digital data by an analogue-to-digital converter (ADC). In a film camera the light instead causes chemical changes in the film emulsion, which are retained until the film is developed.

Humans are basically trichromats. We have three different types of "cones" in our eyes with different responses to light. So we can get away with representing color with three numbers per pixel in a digital imaging system or three layers in a chemical film.

Some time later, we reconstruct an image for a person to view. In a simplistic digital camera we would take the red, green and blue values for each pixel and use them to light the red, green and blue sub-pixels on our display. In reality there is usually some adjustment involved, because the filters in the camera don't precisely match the response of the human eye, and because the frequency bands overlap it is not possible to find "primary colors" that trigger only one cone.
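That adjustment is often just a 3x3 matrix applied to each pixel. A sketch with made-up numbers (a real colour-correction matrix is measured for each sensor model, not pulled from thin air like this one):

```python
import numpy as np

# Illustrative only: a 3x3 colour-correction matrix converting the camera's raw
# (R, G, B) response into display RGB. The off-diagonal terms exist precisely
# because the camera's filter bands overlap and don't match the eye's cones.
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

camera_rgb = np.array([0.30, 0.55, 0.25])      # raw sensor response for one pixel
display_rgb = np.clip(ccm @ camera_rgb, 0, 1)  # corrected value sent to the screen
```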

Regular digital cameras are designed to approximate our eyes, because that is what most people want. But there is no fundamental reason why cameras have to be that way. As long as we can build a lens to focus the rays and a sensor that will respond to them we can capture an image.

It's possible to build a camera that works with multiple wavebands at the same time (this is how regular cameras work), but it's not a great choice for scientific imaging, for a few reasons. Firstly, the "Bayer filter" has to be essentially printed onto the sensor, meaning it can't be changed. Secondly, it means that the pixels for different wavebands have slightly different spatial locations.

So for obscure wavebands a more common solution is to capture one waveband at a time; the images can then be combined into a single multi-channel image after capture. Or only one monochrome image may be captured; it all depends on the goal of the imaging.

Of course we humans can still only see visible light, and we can only see it trichromatically, so at some point the creator of an image has to make a judgement call on how to map the scientific image data (which may have an arbitrary number of channels) to an RGB image (which has exactly 3 channels) for display. Note that color in the final image does not necessarily imply that there were multiple channels in the original image data; it is not uncommon to use a mapping process that maps a single-channel input to a color output.
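A sketch of that judgement call in Python: collapse an N-channel image (five random placeholder channels here) to RGB with a hand-chosen weight matrix. The weights are arbitrary; a different choice gives a different, equally "valid" picture of the same data.

```python
import numpy as np

# Synthetic stand-in for an N-channel scientific image.
h, w, n_channels = 64, 64, 5
data = np.random.rand(h, w, n_channels)

weights = np.array([            # rows: R, G, B; columns: input channels
    [0.7, 0.3, 0.0, 0.0, 0.0],  # longest-wavelength channels drive red
    [0.0, 0.2, 0.6, 0.2, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.7],  # shortest-wavelength channels drive blue
])

# Weighted sum over the channel axis gives one (R, G, B) triple per pixel.
rgb = np.tensordot(data, weights, axes=([2], [1]))   # shape (h, w, 3)
rgb = (rgb / rgb.max() * 255).astype(np.uint8)
```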

$\endgroup$
1
  • $\begingroup$ Also, with multiband (e.g., RGB) imaging chips, the sensitivity is lower because the individual sensors (of the triad at each pixel) are smaller, and resolution is lower because the pixel is spread out over three subsensors. A gray-scale (monochromatic) imager is going to be more sensitive to low light and have better resolution, but if you put filters in front to just pass one color at a time, you miss out on changes over time (only one band at a time can be recorded). $\endgroup$
    – Phil Perry
    Commented Feb 5, 2020 at 17:21
1
$\begingroup$

Let's follow your logic through. You're positing that nothing can happen that our bodies can't do. OK:

  1. How can a car drive 200+ miles an hour if we can't run that fast?

  2. How can a plane fly if we can't?

  3. How can a submarine spend weeks underwater if we can't stay underwater (and alive :-) that long?

The answer is that the machines we build can do things that we cannot do without their aid. We build these machines to expand our capabilities.

Cameras are a machine which can do things, such as respond to radiation in the ultraviolet and infrared range of the spectrum, which our eyes cannot.

$\endgroup$
2
  • 3
    $\begingroup$ while this answers the question at a philosophical level, I doubt it is the kind of answer that would be helpful to the person asking it here. $\endgroup$
    – jwenting
    Commented Feb 5, 2020 at 6:17
  • $\begingroup$ @jwenting : On the other hand, it's exactly the right answer. $\endgroup$
    – WillO
    Commented Mar 1, 2021 at 15:17
0
$\begingroup$

To answer the question "How is it possible that there are UV photos while our eyes cannot detect UV waves?": we are made mostly of water, and water's absorption coefficient has an "optical window": absorption is huge at the ends of the window (infrared and ultraviolet) and low around green wavelengths. That is the reason for our eyes' peak sensitivity to green.

$\endgroup$
1
  • 2
    $\begingroup$ It's not clear how this answers the OP's question. $\endgroup$
    – G_B
    Commented Feb 4, 2020 at 23:34
