
Reaching for the Stars from the Lens of a Telescope

Today’s digital camera sensors have the benefit of acting like a bucket that collects photons (light) – the longer the exposure, the more light (color and detail) is captured and displayed in the photograph. Long exposures from these cameras can capture light far too faint for our unaided eyes to see, but they require precise tracking of the sky. Stars are pinpoints of light, and tracking errors quickly elongate them in a photograph until they look oblong or streaked. The equipment available today allows amateur astronomers to take astrophotography images that rival the detail captured by professionals just a decade ago.

How It All Began

My interest in astronomy started when I was a young boy watching the Apollo program, hoping someday to become an astronaut. At around age 12, my parents bought me a small telescope for Christmas. I’d spend hours looking at the Moon and stars in the night time sky. Over the years my interest in astronomy took a back seat to work and other hobbies (skydiving, SCUBA, motorcycle riding …). All that changed when my wife, Lisa, and I built our second home in a small California community called Groveland in 2003. Groveland is located about 24 miles from the north gate to Yosemite National Park. Due to the decreased light pollution compared to the San Jose Bay Area, the night time sky was alive with stars. Shortly after our housewarming party, my good friend, Tom Parker, presented me with a rather large box with a note attached. The note read, “You need this”. Inside the box was a Celestron C5 telescope. This was not your average “department store” telescope, but rather an entry-level telescope for serious amateurs. You see, Tom is an astrophotographer. He attaches cameras to telescopes and takes long-exposure photographs of deep-sky objects like galaxies and nebulae. Armed with my new Celestron C5, Tom’s guidance, and help from the many Yahoo groups on the internet, I too was off to become an astrophotographer.


Unknown to me, my entrance into astrophotography coincided with the digital camera revolution. This made all the difference, since film is not intended for the long exposures that are required for deep-sky objects. Over the next couple of years, I taught myself this new hobby and slowly upgraded my astronomy equipment arsenal to accommodate the demands of deep-sky astrophotography.


My Very First Astrophotographs

The very first photograph that I took with a telescope pointed at the night time sky was quite easy. I simply held a small digital “point and shoot” camera up to the eyepiece and took a photograph. This is a technique that can be used with any bright object in the sky. By “bright object”, I mean celestial objects that you can see with your eyes when you look up at the night time sky. This is in contrast to many of the “deep sky” photographs that I would later learn to take with very long exposures. Deep-sky objects include any celestial object that is too faint to see when you look through a telescope – like nebulae and galaxies.

One of the most obvious bright objects in the night time sky is the Moon, which was the subject of my very first photograph. The “hold the camera up to the telescope eyepiece and take a photograph” technique doesn’t require any expensive telescope or camera equipment. All you do is look through the telescope to find something interesting in the night time sky, and then hold your camera up to the eyepiece and take a picture. However, you should take your camera out of its “automatic” mode to get the best photographs. When photographing the Moon, use a faster shutter speed or a smaller aperture than the camera recommends (deliberately under-exposing the shot); otherwise the picture will just look like a bright white, over-exposed blob.

I usually take many photographs with different exposure times and then keep just the ones that I like best. Printed with this article is the first photograph that I took with a telescope (Photo 1). This photograph of the Moon appears almost exactly as it appeared to my eye when I viewed it through the telescope, and reveals the many details of the cratered lunar surface.



Photo 1: My first photo of the Moon

Several years after I took this first photograph of the Moon, I was able to perfect a technique called “prime-focus” imaging, where the camera is connected to the telescope and uses the telescope as the lens. Using this technique, I revisited the Moon to take several images of a lunar eclipse.

A lunar eclipse occurs when the Moon passes through some portion of the Earth’s shadow. As you might expect, this can occur only when the Sun, Earth, and Moon are aligned exactly, or very closely so, with the Earth in the middle. The Moon does not completely disappear, even during a total lunar eclipse, because of the refraction of sunlight by the Earth’s atmosphere. The Moon can appear various shades of yellow, orange, and red, because any sunlight that does reach the Moon must pass through a long and dense layer of the Earth’s atmosphere, where the light is scattered. Shorter wavelengths are more likely to be scattered by the small particles, so by the time the light has passed through the atmosphere, the longer wavelengths dominate. We perceive this resulting light as red. This is the same effect that causes sunsets and sunrises to turn the sky a reddish color. The amount of refracted light depends on the amount of dust or clouds in the atmosphere.
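For readers who like numbers, here is a rough illustration of why the red end of the spectrum survives the trip; the two wavelengths are assumed, representative values, and this is only an approximation of molecular (Rayleigh) scattering, not a model of a real eclipse.

```python
# Rough illustration: Rayleigh scattering strength scales as 1/wavelength^4,
# so blue light is scattered out of the sunlight far more strongly than red.
blue_nm = 450.0   # assumed representative "blue" wavelength
red_nm = 650.0    # assumed representative "red" wavelength

blue_vs_red = (red_nm / blue_nm) ** 4   # how much more blue is scattered than red
print(f"Blue light is scattered roughly {blue_vs_red:.1f}x more than red")
```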

This photograph (Photo 2) is a series of 14 separate images of the Moon taken during a 4-hour period, depicting various phases of the eclipse. The exposure times vary between 1/250 and 2 seconds. The timeline begins at the top of the photograph (12 o’clock position), with a full Moon, and moves clockwise. As you circle clockwise around this photograph you can see Earth’s shadow begin to cover the view of the Moon. At first glance, this may appear to be the normal phases of the Moon that you view in the sky each month, except these views of the Moon are taking place over a 4-hour period instead of the 29.5-day cycle that it takes for the Moon to go from “full moon” to “new moon” and back to “full moon”. At the bottom of this composite photograph (6 o’clock position), the Moon was completely in the Earth’s shadow, creating a total lunar eclipse.


Photo 2: Composite image showing time-lapse photographs of a lunar eclipse

Unlike solar eclipses, lunar eclipses are completely safe to watch. You don’t need any kind of protective filters. It isn’t even necessary to use a telescope.


Imaging the Large Planets

The next step in my astrophotography learning curve was imaging large planets. The two largest planets in our Solar System are Jupiter and Saturn. Although you can use the “hold the camera up to the telescope eyepiece and take a photograph” technique that I described earlier, to get a more detailed image you need to correct for the turbulence created by looking through Earth’s atmosphere. To the unaided eye, planets – even the largest, like Jupiter and Saturn – appear as pin-points of light, much like stars. In fact, often what people believe are very bright stars are really planets in the night time sky. Remember the nursery rhyme “Star light, star bright, first star I see tonight, I wish I may, I wish I might, have the wish I wish tonight” – well, that star you may have wished upon was most likely a planet, probably Venus, Jupiter, or Saturn. To see a planet appear larger than a star, you need a telescope with very high magnification. This very high magnification, often 200x or more, distorts the image and magnifies atmospheric turbulence. One way to create a detailed image and minimize distortion is to take many photographs and stack (digitally place on top of one another) the sharpest images. An added challenge is that the planets rotate: over several minutes, a planet can turn enough that stacking images taken too far apart produces a blurry final result.
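To get a feel for how quickly rotation becomes a problem, here is a back-of-the-envelope sketch; the diameter, rotation period, and seeing limit below are assumed round numbers of my own for illustration, not figures from my imaging sessions.

```python
import math

# Back-of-the-envelope estimate of how fast a feature at the center of
# Jupiter's disk drifts because of the planet's rotation (assumed values).
diameter_arcsec = 44.0            # Jupiter's apparent diameter near opposition
rotation_period_min = 9.9 * 60.0  # Jupiter rotates in roughly 9.9 hours

# A feature crossing the central meridian drifts at about pi * D / P.
drift_rate = math.pi * diameter_arcsec / rotation_period_min  # arcsec per minute
print(f"Drift at disk center: ~{drift_rate:.2f} arcsec/min")

# If the finest detail the atmosphere allows is ~0.5 arcsec, rotation starts
# to smear that detail after roughly:
resolution_arcsec = 0.5
print(f"Capture window before smearing: ~{resolution_arcsec / drift_rate:.1f} minutes")
```

This is why planetary video captures are usually kept to just a few minutes.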

The most efficient way to take many high-magnification images over a short period of time is with a webcam that has been modified to replace the eyepiece of a telescope. The advantage of shooting video with a webcam, instead of still images with a camera, is that you can capture many (often 10 to 30) frames per second. Because our atmosphere (which is the air that you look through when you view with a telescope) is constantly changing, brief periods of exceptionally calm conditions can be captured with this webcam video technique. Then, using computer software, the best (sharpest) frames from the video can be stacked to form still images that rival the best images taken with professional telescopes and camera equipment only a decade ago.
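Here is a minimal Python sketch of the “keep only the sharpest frames and stack them” idea, assuming OpenCV and NumPy are installed and using a hypothetical video file name. The sharpness metric (variance of the Laplacian) is a common choice, not necessarily what dedicated stacking software uses, and frame-to-frame alignment is omitted for brevity.

```python
import cv2
import numpy as np

# Read all frames from a planetary video (hypothetical file name).
cap = cv2.VideoCapture("jupiter.avi")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

# Score each frame's sharpness with the variance of its Laplacian:
# sharper frames have stronger edges and therefore higher variance.
scores = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]

# Keep only the best ~15% of frames and average them into one image.
# (Real stacking software also registers each frame on the planet first.)
keep = max(1, int(0.15 * len(frames)))
best = np.argsort(scores)[-keep:]
stack = np.mean([frames[i].astype(np.float64) for i in best], axis=0)

cv2.imwrite("jupiter_stacked.png", np.clip(stack, 0, 255).astype(np.uint8))
```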

I used this technique to image the planet Saturn. Saturn is the sixth planet from the Sun, and the second largest planet in the Solar System after Jupiter. Along with the planets Jupiter, Uranus, and Neptune, it is classified as a gas giant, which means it is composed mostly of hydrogen. It is believed that a small core of rock and ice lies at the center of the planet. In diameter, Saturn is almost 10 times the size of Earth. In Roman mythology, Saturn is the god of agriculture and is the root of the English word “Saturday”.

Saturn’s system of rings, consisting mostly of ice particles with a smaller amount of rocky debris and dust, is visible with a small telescope. These rings are huge – they would span two-thirds of the distance from the Earth to the Moon. Saturn’s atmosphere consists of bands of clouds, similar to the planet Jupiter, that can be seen on a clear night with a reasonably sized (6-inch aperture or more) telescope. One interesting thing about Saturn is that it is less dense than water – this means it would float on top of a very large body of water. Saturn is great to observe because its rings make it so recognizable – it is probably one of the most photographed planets in our Solar System.

The image shown here is an “aligned and stacked” image from the best 67 of 445 frames of video (Photo 3). Very little post-processing was done to this image, other than some sharpening. Several of Saturn’s atmospheric cloud bands are clearly visible as well as, of course, those magnificent rings.


Photo 3: Aligned and stacked image of Saturn

This is an image I took of the planet Jupiter (Photo 4). Jupiter is the fifth planet from the Sun and is the largest one in our Solar System. It is also the fourth brightest object in the sky (after the Sun, the Moon and Venus). It has been known since prehistoric times as a bright “wandering star”. If Jupiter were hollow, more than one thousand Earths could fit inside. It also contains more matter than all of the other planets combined. Colorful latitudinal bands, which are atmospheric clouds and storms, illustrate Jupiter’s dynamic weather systems. The wind speed in the upper atmosphere of Jupiter is believed to exceed 400 MPH.


Photo 4: Aligned and stacked image of Jupiter

Jupiter is just about as large in diameter as a gas planet can be. If more material were to be added, it would be compressed by gravity such that the overall radius would increase only slightly. A star can be larger only because of its internal (nuclear) heat source which pushes out from the center of the star in a tug-of-war against gravity. To become a star, Jupiter would need at least 80 times more mass than it currently has.

The image shown here is an “aligned and stacked” image from the best 256 of 607 frames of video. Many of Jupiter’s atmospheric bands are clearly visible. I always enjoy looking at Jupiter through my telescope and sharing this giant gas planet with others because the atmospheric bands are so easy to see and they change from night to night. Another great feature of observing Jupiter is that on most nights at least a couple of its moons are easy to spot orbiting the planet. One of Jupiter’s moons, Io, can be seen in this image – it looks like a small speck at about the 9 o’clock position relative to Jupiter.


Imaging Deep-Sky Objects

Graduating from imaging “bright objects” (like the Moon and large planets) to imaging “deep-sky objects” (like nebulae and galaxies) is like the difference between going for a walk and running a marathon. Deep-sky objects cannot be seen without the aid of a telescope, and even then they are often so faint that the human eye lacks the sensitivity to see what can be revealed in a long-exposure photograph. The exposure time for most daytime photography is measured in fractions of a second. The exposures that I use to image deep-sky objects are measured in minutes and hours. Long-exposure imaging of faint deep-sky objects presents several challenges, four of which are:

  • a camera that can image faint objects over long periods of time,
  • determining the camera’s focus when the object being imaged is too faint for the human eye to see,
  • aiming the telescope/camera at objects that are too faint for the human eye to see,
  • accurately tracking the sky while the camera is imaging.

Let’s discuss how we tackle each of these challenges.


The Camera

As I mentioned at the beginning of this article, my entrance into astrophotography coincided with the digital camera revolution. This is important because film is not intended for long exposures. With the normal exposure times used for film, the intensity of the light and the duration of the exposure determine the brightness of the photograph. In simple terms, the relationship between the aperture and shutter speed is predictable. At very low light levels and long exposure times, an effect known as reciprocity failure occurs. This is when increasing the exposure time does not result in the exposure that is expected. Reciprocity failure has a large impact on film-based astrophotography. To add to this challenge, the spectrum of light emitted by many deep-sky objects falls outside the sensitivity curves of most films. These film-based issues do not exist with the sensors used in digital photography. Today’s digital camera sensors have the benefit of acting like a bucket that collects photons (light) – the longer the exposure, the more light (color and detail) that’s captured and displayed in the photograph. The relationship between the aperture and shutter speed remains predictable over long periods of time. On the other hand, digital imaging sensors do have their own challenge called dark current. Dark current is the relatively small electric current that flows through an imaging sensor even when no photons are striking the sensor. This appears as noise in the image. However, this dark current and other noise in the image can be minimized by cooling the sensor and by using a technique called dark-frame calibration, which will be described later in this article.
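To see why cooling the sensor matters so much, here is a rough sketch; the dark-current rate and the 6 °C doubling interval are assumed rule-of-thumb numbers, and real sensors vary.

```python
# Rough illustration: dark current roughly doubles every few degrees C.
# A ~6 C doubling interval is a common rule of thumb; actual values are
# sensor-specific, so treat these numbers as illustrative only.
def dark_current(rate_at_25c, temp_c, doubling_c=6.0):
    """Dark current in electrons/pixel/second at a given sensor temperature."""
    return rate_at_25c * 2 ** ((temp_c - 25.0) / doubling_c)

rate_25 = 0.5  # hypothetical e-/pixel/s at 25 C
for t in (25, 0, -20):
    signal = dark_current(rate_25, t) * 600  # electrons in a 10-minute exposure
    print(f"{t:>4} C: ~{signal:.1f} e-/pixel of dark signal in 10 min")
```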

There are several companies that make specialized cameras designed specifically for astrophotography. The one I use is made by a company called Quantum Scientific Imaging (QSI). My QSI camera uses an 8.3-megapixel CCD image sensor. The sensor in the camera can be cooled to 45°C below the ambient temperature to reduce the noise in long exposures. The camera attaches directly to the telescope where the eyepiece is normally located. In this way, the telescope becomes a very big lens for the camera.

Focusing the camera can be a difficult process because the deep-sky object being photographed cannot be seen until after a long exposure, typically at least a minute, is taken. Focus is adjusted by turning the focus knob on the telescope, which moves the camera in or out until the image is sharp. Because the deep-sky object cannot be seen, I focus the camera on a nearby star that is usually in the frame of the deep-sky object being imaged. I take an image of a star and use a technique that measures the width of the star by determining how many pixels on the imaging sensor are being illuminated at half the peak value of the light (photons) from the star. This is called measuring the “full width at half maximum” (FWHM) of the star. The lower the value, the better the focus. Think about starlight illuminating an array of imaging pixels: when the fewest pixels are illuminated, maximum focus is achieved. I have automated this process using a commercially available product called RoboFocus, which is made by a company called Technical Innovations. RoboFocus uses a microprocessor-controlled stepping motor that attaches to the telescope focus knob. Using software and a focusing algorithm, the FWHM of the star is measured and focus is adjusted until the optimal focus is achieved.
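Here is a simplified sketch of the FWHM measurement, fitting a Gaussian to a one-dimensional slice through a star image. The fitting approach is my own illustration; the algorithm used by RoboFocus and the focusing software may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, background):
    return background + amplitude * np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def star_fwhm(profile):
    """Estimate FWHM (in pixels) from a 1-D slice through a star image."""
    x = np.arange(len(profile))
    guess = [profile.max() - profile.min(), np.argmax(profile), 2.0, profile.min()]
    (amp, center, sigma, bg), _ = curve_fit(gaussian, x, profile, p0=guess)
    return 2.355 * abs(sigma)   # FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian

# Example: a synthetic, slightly defocused star profile (made-up numbers).
x = np.arange(21)
profile = gaussian(x, amplitude=1000, center=10, sigma=3.0, background=50)
print(f"FWHM ~ {star_fwhm(profile):.1f} pixels  (lower means better focus)")
```

An autofocus routine simply repeats this measurement while stepping the focuser and keeps the position that gives the smallest FWHM.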


The Telescope Mount

Now would be a good time to introduce one of the most critical pieces of equipment for astrophotography – the telescope mount. The telescope mount is at the core of the imaging configuration. Taking photographs of deep-sky objects requires a mount that can accurately track the sky throughout very long camera exposures. The mount provides a stable base for the optical tube (the actual telescope), points the optical tube at the object to be viewed, and accurately tracks the sky while the object is being imaged.

First, let me discuss the importance of the stable base provided by the telescope mount. The mount must be able to support the payload of the optical tube (in my case there are several of them), as well as the camera, and accurately track the sky. I chose a German equatorial mount called a Paramount ME, which is made by a company called Software Bisque. The Paramount ME has a payload capacity of 150 lbs (68 kg) and can track the sky with an accuracy of several arc-seconds. To understand what this means, I need to explain image scale. Picture the entire dome of the night time sky as the face of a clock. The clock is divided into hours, minutes, and seconds. Much like this clock example, the celestial dome above you is divided into degrees, and each degree is divided into arc-minutes and arc-seconds. The ideal scenario, assuming a flat surface and no obstructions, would be a 180-degree view from horizon to horizon. There are 60 arc-minutes in each degree, and each arc-minute is made up of 60 arc-seconds. With this in mind, celestial objects in the night time sky can be described as having a specific size measured in arc-minutes or arc-seconds. To get a sense of just how small a slice of the sky an arc-second represents, take a U.S. quarter and move it 3 miles away – the diameter of a U.S. quarter, as seen from 3 miles away, is about one arc-second! As an example, a full moon covers approximately 0.5 degrees of sky, which is 30 arc-minutes or 1,800 arc-seconds. Another example is a double star system, where two stars are gravitationally bound to each other. A good example is Polaris, the North Star: the main bright star, Polaris A, is separated from the small, faint star Polaris B by 18” (eighteen arc-seconds). Image scale relates the sky to the imaging sensor and is usually measured in arc-seconds per pixel. This explains the importance of accurately tracking the sky: stars, which appear as pin-points and typically span only a few pixels, will quickly become oblong and then streak if there are guiding errors. This becomes less of an issue at lower magnifications or wider fields of view, both of which have a larger image scale (more arc-seconds per pixel).
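These arc-second figures connect to camera pixels through the image scale. Here is a minimal sketch using the standard small-angle formula; the pixel size and focal length below are made-up values for illustration, not my actual equipment.

```python
# Image scale from the standard small-angle relation:
#   arcsec per pixel ~= 206.265 * pixel size (um) / focal length (mm)
def image_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical setup: 5.4 um pixels behind a 1000 mm focal-length telescope.
scale = image_scale(5.4, 1000.0)
print(f"Image scale: {scale:.2f} arcsec/pixel")

# A tracking error of a few arc-seconds therefore smears a star across
# several pixels, which is why guiding accuracy matters so much.
error_arcsec = 3.0
print(f"A {error_arcsec} arcsec error spans ~{error_arcsec / scale:.1f} pixels")
```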

Without going into a complicated discussion of polar alignment, let’s just say that the mount’s polar axis must be pointed precisely at the North Celestial Pole. The Earth rotates around its poles (north and south), making one revolution each day – approximately 24 hours. This rotation can be seen by noting the change in position of celestial objects in the sky. Most notable during the day is our Sun, and at night our Moon. The stars and other celestial objects move across the sky in much the same way as the Sun and Moon. For people in the northern hemisphere, there is one point in the sky that doesn’t appear to move: the North Celestial Pole, the point in the sky around which all the stars seen from the northern hemisphere rotate. The North Star, also called Polaris, is located almost exactly at this point in the sky. If you go out at night and find the North Star you will notice that it doesn’t move during the course of the night, while all the other stars do; they rotate counter-clockwise around the North Star (from east to west – think about the Sun rising in the east and setting in the west). It’s similar to spinning a basketball at the end of your finger: the point where your finger contacts the basketball is almost stationary as the rest of the ball rotates around it. This alignment of the telescope with the North Celestial Pole is critical to taking long-exposure photographs of the sky.

Once the telescope system is polar aligned, I then align the mount to its geographic location. This is done by pointing the telescope at several known stars in the sky. The mount has a database of celestial objects and can “learn” its exact location by referencing this database against the positions of those stars. Now the mount can point the telescope at any object in the sky that I want to image. I use a software program called The Sky to aim the telescope, by way of controlling the mount, at the position in the sky that I want to photograph. Once the program has aimed the telescope, I take some short (about 2 to 5 minute) exposures to confirm that it’s pointed in the right place and to frame the object in the center of the photograph. It’s important to remember that many of the celestial objects that I photograph are so faint that you can’t see them visually when you look through a telescope, so this process ensures that I am imaging the object of interest. Once confirmed, I can increase the exposure times to reveal more detail.

The Observatory

For several years I did my astrophotography from the driveway of our home in Groveland, CA. My imaging session would begin just before sunset, when I rolled out the telescope equipment. There are many pieces to the setup that I use – the telescope mount, the imaging telescope, the guiding telescope, the camera, adapters, and all the other electronic equipment needed to ensure precise tracking of the sky. Once I had rolled out and set up the nearly 200 pounds of equipment and the sky was dark enough to see some stars, I would begin the process of aligning the telescope system to the sky. This alignment is critical to taking long-exposure photographs of deep-sky objects. Once the telescope system was aligned, I needed to teach the mount its geographic location by pointing the telescope at several known stars in the sky. This entire setup process, including focusing the camera, took about 2 ½ hours (Photo 5). Breaking down all the equipment at the end of my imaging session, typically around 4 AM the following morning, took about another hour. Since I couldn’t leave the telescope set up outside for several nights, this was a 3 ½ hour process each and every time I imaged the sky.


Photo 5: Basic telescope and camera setup in driveway

With a permanent setup, an observatory, this 3 ½ hour setup/breakdown is unnecessary. The telescope is attached and aligned on a permanent pier. The entire setup process with an observatory typically takes less than 30 minutes and there is no breakdown at the end of the evening – you simply close the observatory dome.

Lisa and I began looking for a location to build an observatory in 2006. The site where our home is built in Groveland is too close to the golf course and the lights from our neighbors. We decided that the right observatory site for us would include a balance of dark skies, meaning minimal light pollution from neighboring sites, and proximity to our existing home in Groveland – ideally no more than a 30-minute car ride. After about a year of checking dozens of potential building sites, we purchased a 10-acre plot of land. It has a 360-degree panoramic view of the sky, minimal light pollution, and is less than 5 miles from our existing home in Groveland. As an added bonus, the daytime views are spectacular – which led us to modify our plans for the observatory to include a small studio-type house.

To read more about the construction of our observatory visit the In Compliance website at http://www.incompliancemag.com/pavlu_observatory.

The Telescopes

The main telescope, referred to as the imaging scope, attaches to the mount. The imaging scope is where I attach the imaging camera. Because of the precise tracking requirements, I also use another telescope that is attached to the imaging scope. This telescope is called a guide scope. I use the guide scope to correct any tracking errors by focusing on a star, called a guide star. By placing a guide star in the cross-hairs of a special eyepiece, small adjustments can be made to the mount tracking. This is a very tedious process as these small adjustments are made several times a minute while the photographic exposure is being completed. I can guide manually by making the adjustments myself, or I can use an additional camera attached to a computer to auto-guide the telescope mount by sending electronic signals that make the small corrections needed to ensure precise tracking of the sky.
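Here is a highly simplified sketch of that auto-guiding loop: measure how far the guide star’s centroid has drifted from its starting position and nudge the mount back by a fraction of the error. The grab_frame and send_correction functions are stand-ins of my own for the real camera and mount interfaces, and the drift simulation is purely illustrative.

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (x, y) of a small guide-star image."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (xs * image).sum() / total, (ys * image).sum() / total

def guide_loop(grab_frame, send_correction, gain=0.7, cycles=100):
    """Keep the guide star locked near its starting centroid."""
    ref_x, ref_y = centroid(grab_frame())
    for _ in range(cycles):
        x, y = centroid(grab_frame())
        dx, dy = x - ref_x, y - ref_y            # drift since the reference frame
        send_correction(-gain * dx, -gain * dy)  # nudge the mount back a fraction

# --- Tiny simulated demo (stand-ins for the real camera and mount) ---------
def fake_star_image(x0, y0, size=32):
    ys, xs = np.indices((size, size))
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / 8.0)

pos = {"x": 16.0, "y": 16.0}
def grab_frame():
    pos["x"] += 0.3                              # simulated tracking drift
    return fake_star_image(pos["x"], pos["y"])
def send_correction(dx, dy):
    pos["x"] += dx                               # pretend the mount responds instantly
    pos["y"] += dy

guide_loop(grab_frame, send_correction, cycles=25)
print(f"Guided star sits near x ~ {pos['x']:.1f} (it started at 16.0)")
```

In a real system the pixel offsets are converted into mount correction pulses using the guide camera’s image scale and the mount’s guide rate.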

Printed with this article is a photograph of my astrophotography setup (Photo 6). The (red) telescope mount (Paramount ME), which moves the telescope and tracks whatever I happen to be photographing, is attached to the pier that penetrates our observatory floor. Only the very top of the pier can be seen in this photograph. There are two main imaging telescopes, a 3” and a 6” refractor. In this photograph, the imaging and guide cameras are attached to the larger 6” (white) refractor. I also have a 10” (blue) reflector telescope attached to the mount that is used as a visual telescope. This visual telescope has an eyepiece that allows observing without disturbing the imaging camera setup.


Photo 6: Detail of mounted telescope and camera


The Power of Exposure Time

Once it’s dark enough and the camera is focused, it’s time to take some photographs. I take many separate photographs with exposure times as short as 15 seconds and as long as 30 minutes. Then, all the individual photographs are aligned and digitally “stacked” one on top of another to create the final image. It is this long-exposure alignment and stacking technique that creates an image with the level of detail visible in the deep-sky images accompanying this article. Depending on how faint the object I’m photographing is, the total exposure time of the images that I stack can exceed 16 hours. In that case, I will take photographs over several evenings to create the final image. Processing all the individual images to create the final image typically takes 60 to 80 hours.
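The reason stacking works is statistical: the faint signal is the same in every frame, while the random noise partially averages out, so the signal-to-noise ratio grows roughly with the square root of the number of frames. Here is a small demonstration with synthetic data; the numbers are arbitrary, not taken from my images.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 10.0   # hypothetical faint nebula brightness (arbitrary units)
noise_sigma = 50.0   # per-frame random noise, much larger than the signal

for n_frames in (1, 11, 52):
    # Simulate n_frames noisy exposures of the same faint signal and average them.
    frames = true_signal + rng.normal(0, noise_sigma, size=(n_frames, 100_000))
    stacked = frames.mean(axis=0)
    snr = true_signal / stacked.std()
    expected = true_signal / noise_sigma * np.sqrt(n_frames)
    print(f"{n_frames:>3} frames: SNR ~ {snr:4.1f}  (expected ~ {expected:.1f})")
```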

People are often surprised when they don’t see images like those I’ve shared with this article when they look through a telescope. While few experiences can replace the “wow” factor of seeing the planet Saturn’s rings or the Orion Nebula through a telescope for the first time, the human eye is simply no match for the light-gathering ability of a digital camera. Now don’t get me wrong, I’m not suggesting that we replace all our telescope eyepieces with electronic displays. I’m simply stating that each technique has its merits and drawbacks.

The human eye contains two types of photoreceptors, rods and cones. The rods are more numerous (some 120 million) and are more sensitive than the cones. However, rods are not sensitive to color. This is why you typically see shades of gray, instead of color, in low-light conditions. Try it yourself: look at a colorful picture outside at night and notice how little color you can make out. Today’s digital camera sensors don’t have these low-light color restrictions. You can think of a digital camera sensor as a bucket that collects photons (light) – the longer the exposure, the more light (color and detail) that’s captured and displayed in the final photograph.

One night sky feature popular with both beginning and experienced stargazers is the Pleiades, also known as the Seven Sisters for the seven stars that can be seen with excellent eyesight. The Pleiades are a prominent sight during winter in the northern hemisphere and, to the naked eye, appear as a small cluster of stars.
The Pleiades are about 400 light-years from Earth. If you have less than excellent eyesight, the cluster may look more like a small fuzzy patch, about the size of a dime at the end of your fully extended arm. In addition to the stars, the Pleiades contains a reflection nebula. Reflection nebulae are clouds of dust that reflect the light of a nearby star or stars, similar to how the headlights on a car illuminate fog. Thus, the color shown by reflection nebulae is similar to that of the illuminating stars.

I’ve included three images of the Pleiades here (Photo 7). The first image illustrates what you would expect to see when viewing the Pleiades through a telescope. The most visible feature is the stars, as little, if any, of the reflection nebulosity can be seen. The second image shows the Pleiades after 110 minutes of exposure: eleven individual images, 10 minutes of exposure each, easily reveal the nebulosity. The third and final image reveals the Pleiades in all their glory – fifty-two individual images, 10 minutes of exposure each, taken over 3 separate evenings. These 520 minutes of exposure reveal the complex nature and color of the Pleiades with all their reflection nebulosity.


Photo 7: Illustration of various exposure times

These three images of the same beautiful deep sky object show vastly different detail. At one end of the spectrum, the telescope view is the real-time “being one with the universe” personal experience of the light that left Pleiades nearly 400 years ago. At the other end of the spectrum is the image created from nearly 8 ½ hours of exposure time, with all the beauty and color revealed thanks to the power of exposure time.


Process Flow

Photographing deep-sky objects is all about exposure time. As I mentioned, I take many separate photographs with exposure times as short as 15 seconds and as long as 30 minutes. Then, all the individual photographs are aligned and digitally “stacked” one on top of another to create the final image. In order to do this effectively, I follow a strict process when creating a final image. Let’s assume that I’ve captured images of a deep-sky object over several nights. The first thing I need to do is calibrate these individual images. Calibrating the images removes the noise associated with the digital imaging sensor and the optical imaging path. I perform three types of calibrations on each photograph:

1. flat-frame calibration
2. bias-frame calibration
3. dark-frame calibration.

Flat-frame calibration is used to minimize imperfections in the optical path. In the case of my photographs, the optical path begins when the light enters the telescope and ends when the light hits the imaging chip in the camera. Flat-frame calibration is performed by taking a photograph of an evenly illuminated neutral surface so foreign objects, like dust specks, can be digitally subtracted. This calibration also compensates for any vignetting, which is when the edges of the photograph are darker than the center.

Bias-frame calibration is used to minimize the offset (bias) that appears when a pixel is read from a digital camera. This offset is caused by the readout noise produced by the electronics that read the pixel values. The bias for a particular camera is generally constant from exposure to exposure. A bias frame is a very short exposure taken with the shutter closed. Its pixel values vary slightly from pixel to pixel but, except for a small amount of noise, the value of any one pixel will be consistent from image to image. Since the bias is consistent from image to image, it can be subtracted from the deep-sky images.

Dark-frame calibration is used to minimize the noise that accumulates in camera sensors during a long exposure. This noise increases with exposure time and temperature and has a random component. This means that several dark frames must be taken at the same temperature and exposure times as the deep-sky images, except that dark frames are taken with the light path blocked – basically they are taken with the lens cap on. These dark frames are averaged and subtracted from each of the deep-sky images.
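Here is a condensed sketch of how those three calibrations combine for a single light (deep-sky) frame, assuming the frames have already been loaded as NumPy arrays. Real pipelines work with FITS files and handle temperature matching, scaling, and outlier rejection more carefully than this.

```python
import numpy as np

def calibrate(light, bias_frames, dark_frames, flat_frames):
    """Apply bias, dark, and flat calibration to one light (deep-sky) frame.

    All inputs are 2-D NumPy arrays (or stacks of them); the darks are assumed
    to match the light frame's exposure time and temperature, as described above.
    """
    master_bias = np.median(bias_frames, axis=0)
    # Darks already contain the bias, so remove it to keep only the thermal signal.
    master_dark = np.median(dark_frames, axis=0) - master_bias
    # Flats are bias-subtracted and normalized so the division preserves flux.
    master_flat = np.median(flat_frames, axis=0) - master_bias
    master_flat /= np.mean(master_flat)

    return (light - master_bias - master_dark) / master_flat
```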

Once each of the images is calibrated, they need to be aligned. Since each image will be digitally stacked one on top of another, they must be perfectly aligned so the pixels add appropriately. Image alignment is performed by choosing several features in a reference image, usually stars, so all the other images can be shifted, scaled, and rotated to match the exact positioning in the reference image. Then, after calibration and alignment, the images can be digitally stacked to bring out details that are not visible in any individual image.
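Here is a minimal sketch of alignment followed by stacking, using whole-image cross-correlation to find each frame’s shift relative to a reference. The full process described above also aligns on selected stars and corrects rotation and scale, which this simple translation-only version skips.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def align_and_stack(frames):
    """Align calibrated frames to the first one by translation, then average them."""
    reference = frames[0]
    aligned = [reference.astype(np.float64)]
    for frame in frames[1:]:
        # (dy, dx) offset needed to register this frame with the reference image.
        offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        aligned.append(shift(frame.astype(np.float64), offset))
    # A simple mean is shown here; median stacking would also reject outliers
    # such as satellite trails or cosmic-ray hits.
    return np.mean(aligned, axis=0)
```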


Photography & Engineers

Given my background, I have many friends who are engineers. Quite a few of them are compliance engineers. I’ve noticed that many of them have photography as a hobby … and I use the term “hobby” lightly, as many of them would be considered professionals except that they don’t make a living from their photography. My friend Tom, mentioned at the beginning of this article, is a compliance engineer and an astro and landscape photographer. My friend Russell is an electrical engineer and a SCUBA (underwater) photographer. My dad, an engineer, shoots nature photography. My friends Barry, Michael, and Gaylon are all engineers and photographers.

This convergence of engineering and photography is no accident. I think it happens because certain types of photography offer a good balance of technical challenge and creativity. We’re all familiar with the distinction between left-brain dominant (logical) and right-brain dominant (creative) people. Let’s define casual photography as using a point-and-shoot camera in an automatic mode to take photographs that look “pretty”, and technical photography as using a DSLR in a manual mode to take complex photographs – like the ones explained in this article. Using this definition, I believe technical photography uses both the left and right sides of the brain. I believe most engineers tend to be left-brain (logical) dominant, so technical photography helps to exercise the right side of our brain without being so overwhelmingly creative that it’s too foreign to relate to. Whatever it is, I strongly suspect many amateur photographers who take technical photographs also have a technical (e.g., engineering) background. For me personally, there are few other hobbies where I get to use my technical skills to make something that looks “pretty”. I hope you have enjoyed the results. Live long and prosper.

For more from Eddie Pavlu…

Shooting for the Stars with a DIY Observatory

Excerpts from Eddie’s Deep-Sky Photo Album

 

Eddie Pavlu
was most recently Vice President of Operations at National Technical Systems. Prior to that he was President and CEO of Elliott Laboratories. He has bachelor’s and master’s degrees in Electrical Engineering, and has been in executive management for the past 16 years. He is a senior member of the IEEE and a member of the EMC Society. Outside of business, he is an amateur astronomer and astrophotographer, with photographs published in several publications, including Astronomy Magazine.

 

 
