Taking pictures of churches
1. Churches don't move, but there may be lots of visitors who get in the way.
2. Churches are big.
3. When shooting from the outside, there is often little space between the church and the photographer.
4. The interior of a church is generally a very high contrast scene.
5. Stained glass is high up and has high contrast.
6. Weather and lighting change.
7. Some interesting details are far away, dark, and placed at uncomfortable viewing angles.
8. Sometimes there are obstructions in front of the subject you want to photograph.
9. You may need permission to take pictures.
a) Both detailed shots and general shots are required.
b) Need pictures with little distortion, in which proportions are respected.
c) Views from as many angles as possible.
d) Need as much detail as possible.
e) Require as much as possible to be visible.
f) Need to make it easy to identify the details for later classification.
Be aware of sacred places: don't shoot where people are praying.
Be aware of Mass times: don't shoot then.
Avoid flash as much as possible: it creates unnatural lighting and reflections that you will rarely want. A flash is also rather obnoxious to people passing by. Some people don't know this, but a flash is worse than useless when photographing stained glass.
You will need to take situational shots to complement the high quality shots. The former make it possible to see where the detailed shots belong. You won't need a tripod for these. Situational shots can even be replaced by videos that pan all over the church. Store these pictures separately.
Lack of sharpness is mostly due to camera movement.
Apart from some general (wide angle) outdoor shots, you need a tripod. Not only does a tripod make it possible to take the sharpest pictures possible, it also lets you take several shots from the same angle: this makes it possible to remove people from pictures and to do multiple-exposure photography. Indoor shots require long exposures: don't count on high ISO to obviate the need for a tripod, since high ISO reduces the dynamic range of the sensor and reduces sharpness.
Generally speaking, the heavier the tripod the better, but some materials are better than others: carbon fiber lets you get away with a lighter tripod.
To compensate for a flimsy tripod you can use the shutter delay, so that the picture is taken a few seconds after you press the shutter button.
A serious tripod allows you to change the tripod head.
A problem with many tripod heads is that the camera is screwed directly on top of them; the camera will then swivel around that screw if you attempt a vertically framed shot.
The best solution I know of is a specially designed camera adapter used with an Arca-Swiss compatible head. There are lots of manufacturers of these.
Raising the tripod’s vertical column decreases stability a lot!
Long telephoto lenses require lots of stability and strength: generally a very heavy tripod, and the camera should somehow be connected to the tripod with an extension arm while the lens rests on the tripod head.
Pictures should be taken with the mirror raised a few seconds before exposure if you are using a DSLR. You should use a cable release.
When framing vertically, the telephoto lens and camera swivel around the axis if the mirror moves: you need to raise the mirror about 10 seconds before the shot!
It used to be the case that you could fire a camera with a cable release actually made of steel. This prevented a lot of camera shake. Now it's all electronic. Some cameras can be fired with an infrared device. If you don't have one, the next best thing, as we said, is to use the delayed exposure available on nearly all cameras.
Your lens or camera may have a stabilizer. Use it for handheld shots.
When using a tripod, turn the image stabilizer off: this prolongs the life of the stabilizer and often makes pictures sharper anyway.
To take a picture at a difficult angle use a camera with a swivel screen or use an angle viewer over the eyepiece. This makes vertical shots easy to accomplish. Use a tripod of course.
Most sensors have a "native" ISO speed, typically 100 or 200 (ISO used to be called ASA). At this speed a camera has its greatest dynamic range: the span between the brightest and darkest objects in which detail can still be seen. The very best sensors have a dynamic range of 13 stops, which means that the brightest details are two to the power 13 times brighter than the darkest details. It used to be around 8 stops when digital cameras came out. Pocket digital cameras have a lower dynamic range. For more detail see the site dxomark.com.
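To put numbers on that, a dynamic range of n stops means a contrast ratio of 2 to the power n to 1:

```latex
2^{13} = 8192 \qquad\text{versus}\qquad 2^{8} = 256
```

So a modern 13-stop sensor spans roughly an 8000:1 contrast ratio, against about 250:1 for the early 8-stop ones.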
You can find lower ISO speed settings on my camera: the grain is lower than at 100 ISO, but the dynamic range is reduced.
Many cameras allow you to shoot in raw or JPEG mode, and some offer both at once. What is raw? It's a proprietary format, varying from camera to camera, that encodes the data from the sensor before any processing is done to convert it to JPEG. The software on your computer might not be able to display raw files. You can find lots of software that converts raw to JPEG, so what's the point? The point is that many adjustments can be made far better in raw than by working on the JPEG.
One of the most well known adjustments is adjusting the "color temperature". We all know that pictures taken in incandescent lighting end up looking more yellowish than those taken in bright sunlight. That's because light from bulbs contains proportionally more red and yellow light than sunlight does. Almost no ultraviolet comes out of incandescent light bulbs. The sun's surface is a lot hotter than a filament in a bulb, and the light ends up being less yellow. In fact a law of physics says that the distribution of frequencies coming from a hot body depends on the temperature of the body (Planck's law) and that the peak frequency is proportional to the absolute temperature of the body (Wien's displacement law). A bulb's filament is typically at 3300 kelvin whereas the surface of the sun is at 5500 kelvin. So sunlight is said to have a color temperature of 5500 K. I'll let you guess the temperature of light coming from a bulb.
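If you plug those two temperatures into the wavelength form of Wien's displacement law (b is Wien's displacement constant), the shift toward red becomes obvious:

```latex
\lambda_{\max} = \frac{b}{T},\quad b \approx 2.898\times10^{-3}\ \mathrm{m\,K}:\qquad
\lambda_{\max}(3300\ \mathrm{K}) \approx 878\ \mathrm{nm},\qquad
\lambda_{\max}(5500\ \mathrm{K}) \approx 527\ \mathrm{nm}
```

878 nm is in the near infrared, so a filament's output peaks beyond the red end of vision, while 527 nm sits in the green, right in the middle of the visible spectrum.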
Not all light sources are hot bodies, and the frequency distributions of these other sources do not necessarily follow Planck's law. Witness the green look of pictures taken in fluorescent lighting, for example.
A white flash typically approximates a hot body at about 6500 K.
Most cameras try to compensate for the average color temperature so that neutral grey objects appear neutral grey. They don’t always do it very well.
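Cameras' actual white balance algorithms are proprietary, but the oldest textbook idea, the "gray world" assumption, is easy to sketch. Here is a minimal illustration in Python with numpy; it is not what any particular camera does, and the function name and the [0, 1] float convention are mine:

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each channel so its mean matches the overall mean.

    img: float array of shape (H, W, 3), values in [0, 1].
    Assumes the scene averages out to neutral grey (the "gray world"
    assumption), which is exactly what fails under strongly coloured light.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means   # per-channel gain
    return np.clip(img * gains, 0.0, 1.0)
```

When the dominant light is very yellow (a candle-lit crypt, say), the gray world assumption breaks down, which is one reason cameras "don't always do it very well".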
For maximal sharpness it is generally advised to shoot raw. Software that has a lot of time on its hands can make a sharper JPEG than the one made in-camera.
By the time raw has been converted to JPEG, it has lost a lot of its dynamic range. If you have underexposed in raw, you have a much better chance of compensating for it than if you work in JPEG. Raw typically has 12 to 14 bits per color for each pixel, whereas JPEG has 8 bits per color per pixel. With more bits you can encode more range.
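In terms of distinct tonal levels per color channel, those bit depths work out to:

```latex
2^{8} = 256,\qquad 2^{12} = 4096,\qquad 2^{14} = 16384
```

A 14-bit raw file can thus distinguish 64 times as many levels per channel as an 8-bit JPEG; that is the headroom that rescues an underexposed shot.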
When you create a JPEG file from a raw file, a lot of information is thrown away, including the smooth transitions in slowly varying parts of your picture (like the sky). This compression of information is why JPEG files are a lot smaller than raw files. The compression ratio is variable, but there is no agreed-upon standard as to what a quality of 95 means; the scale generally goes from 0 to 100. I would not advise using a quality under 60, but please try it out, it's very instructive.
Incidentally, whenever you edit a JPEG file you throw away some more information when you save it again. This is why professionals are reluctant to perform a series of modifications on a JPEG file. There are other formats that involve no detail loss: TIFF and BMP, for example. You could write a whole book on the subject.
As we said, you can convert raw to JPEG in-camera or on a computer. A drawback of the former is that the raw file is often thrown away once it's converted. The quality of raw converters varies, and this generates a lot of discussion on forums. Where they differ most is at high ISOs. A sensor can function at a high ISO because the signal from the sensor is amplified before it gets stored and converted to digital form (this is a gross simplification). Heat inside the sensor moves electrons around randomly, and some of this produces what is called "noise", sometimes called "grain", though it's not quite the same thing as with film. It gets worse with high amplification, i.e. high ISO. Raw converters try to reduce this noise with all sorts of tricks, and they don't all do it equally well. The algorithms involved are generally secret.
It's often said that the raw converter from company X is best for company X's raw files, X being Nikon, Canon or whatever. This is no longer quite true, as there are independent raw converters at the top of the league (DxO, Adobe, …). Removing noise generally involves loss of detail (it's a kind of smoothing process). This is one reason why high ISO pictures are not as sharp as low ISO pictures.
If you shoot the same scene with exposures of different durations, you will capture more shadow detail in some pictures (the long exposures) than in the short ones. In the latter you will see more detail in the "highlights", which otherwise tend to be "burned out". Back in the early days of digital photography, say 2004, if you took a picture of somebody's face in sunlight there was a good chance of the sunlit part being "burned out" to pure white. Wedding dresses had no detail in digital pictures, and color film was far superior in this respect.
The range of light intensities inside a church is enormous. This is where multiple exposures with a tripod come in handy. The same scene is taken with, say, 3 exposures: half a second, 2 seconds and 8 seconds. The resulting pictures are combined; hopefully nothing has moved inside the scene.
You then combine the pictures with software. Some cameras can do this in-camera.
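To illustrate the software side, here is a minimal exposure-fusion sketch using OpenCV's Mertens algorithm, which blends bracketed shots without needing the exposure times. The filenames are hypothetical, and this is just one of several ways such programs combine exposures:

```python
import cv2
import numpy as np

# Hypothetical filenames for the three bracketed exposures.
files = ["nave_0.5s.jpg", "nave_2s.jpg", "nave_8s.jpg"]
images = [cv2.imread(f) for f in files]

# Align small shifts between frames (useful if the tripod was bumped).
cv2.createAlignMTB().process(images, images)

# Mertens exposure fusion: keeps the best-exposed parts of each frame.
# Unlike true HDR merging it needs no exposure times and no tone mapping.
fused = cv2.createMergeMertens().process(images)   # float32 in [0, 1]
cv2.imwrite("nave_fused.jpg",
            np.clip(fused * 255, 0, 255).astype("uint8"))
```

Dedicated HDR programs expose far more parameters than this, but the principle is the same.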
Taking pictures at varying apertures and combining the results looks very strange, because the depth of field changes from shot to shot, so people vary the exposure times instead.
There are lots of programs that let you combine these exposures. One of the best known, and probably the earliest on the market, is Photomatix, written by Geraldine Joffre.
One of its strengths is that it can adjust pictures that are a little misaligned. This means you can take multiple exposures without a tripod (but it’s best to use a tripod).
There are lots of parameters available when creating a so-called High Dynamic Range (HDR) picture; experimentation is generally the best guide. Pictures can look artificial, but that can be the price to pay for enormous dynamic range.
Incidentally, do you know the dynamic range of a printed picture? There is about a 100 to 1 ratio between the blackest part of the page and the brightest. Not much, is it? So printing involves compression.
A computer screen can have a contrast of 4000, for example (some special screens have a contrast of 60,000). This explains why some landscapes may look better on a screen than on paper (apart from the obvious lack of fine detail).
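Converted into the stops used earlier (by taking base-2 logarithms of the contrast ratios):

```latex
\log_2 100 \approx 6.6,\qquad \log_2 4000 \approx 12.0,\qquad \log_2 60000 \approx 15.9
```

A print covers roughly 6 to 7 stops and an ordinary screen about 12, so something always has to be compressed on the way to paper.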
What wide angle and telephoto lenses do generally goes without saying. Some people confuse zooms with telephoto lenses, because zooms often have a telephoto position. Here's a quick rundown of the various lens choices:
In general, zooms are more practical than fixed focal length lenses: they save you from changing lenses in a dusty place, and the total weight and cost of the equipment is lower. However, unless the zoom is particularly expensive (and we are usually talking about 2000 dollars or more), a zoom lens is not going to be as sharp as a fixed focal length lens. What's more, zooms generally have high distortion, especially at the wide end. This distortion can be corrected with software such as DxO, Photoshop, Picture Window and PTLens. We'll come back to these applications later.
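To give an idea of what such correction tools do under the hood, here is a rough sketch of radial (barrel) distortion correction with OpenCV. The camera matrix and the distortion coefficient are purely illustrative; real tools like DxO or PTLens use measured profiles for each specific lens:

```python
import cv2
import numpy as np

img = cv2.imread("facade.jpg")          # hypothetical input file
h, w = img.shape[:2]

# Simplified pinhole camera matrix; fx, fy, cx, cy would normally come
# from a calibration of your specific camera/lens combination.
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)

# Distortion coefficients (k1, k2, p1, p2). A negative k1 corrects the
# barrel distortion typical of zooms at their wide end; the value here
# is purely illustrative.
dist = np.array([-0.15, 0.0, 0.0, 0.0])

undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("facade_undistorted.jpg", undistorted)
```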
Image distortion is really something you want to avoid when taking pictures of architecture: it doesn't look good to make a building appear to have curved walls. The lenses with the most distortion and the widest angle are called fisheye lenses. Despite the huge distortion, people generally forgive the lens, and pictures with such lenses can be useful because of the extreme wide angle. This is particularly useful for indoor shots, when you want to see how different parts of the church relate to one another. It can also be useful for situational shots.
There is one kind of distortion that plagues many pictures of churches and has nothing to do with the curvature of the walls. This is perspective distortion, which occurs when you point your wide angle lens upwards at a building. Look at the following photo taken with a 24mm lens:
Verneuil sur Avre: Eglise Ste Madeleine
In the above picture it looks as if the tower is the leaning tower of Pisa.
The following picture is more pleasing:
Not only is the second picture more pleasing, but the proportions of the various floors of the tower are respected.
How did I get that second picture?
I actually used a special wide angle lens called a shift lens, but before I go into an explanation, here are some other possible methods:
A) I could have taken a picture with an extreme wide angle lens, pointing the camera horizontally; then I would have removed the bottom part of the picture.
B) I could have taken the first picture and used software to straighten the picture.
Now, I don't happen to have a lens wide enough to use trick A. Such lenses are extremely rare.
As to trick B, it isn't as easy as it sounds. For one thing, the top of the tower is going to get rather blurry as you stretch it out. Another problem is that you might not get the vertical proportions correct.
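For the curious, trick B boils down to a projective (perspective) warp. Here is a minimal sketch with OpenCV; the point coordinates are invented for illustration, and note that stretching the top of the tower is exactly what causes the blurriness mentioned above:

```python
import cv2
import numpy as np

img = cv2.imread("tower_tilted.jpg")     # hypothetical input file
h, w = img.shape[:2]

# Four points on the leaning facade (e.g. the corners of a wall that
# should be rectangular), picked by hand in an image viewer...
src = np.float32([[210, 80], [1050, 95], [1180, 900], [60, 910]])
# ...and where those corners should end up for the verticals to be vertical.
dst = np.float32([[100, 80], [1150, 80], [1150, 910], [100, 910]])

# A projective transform is defined by exactly 4 point correspondences.
M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("tower_straight.jpg", corrected)
```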
You may notice that the second picture is not quite perfect; this is because I did it in a hurry, hand-holding the camera.
OK, so what is a shift lens? It's an extremely wide angle lens that casts a huge image circle behind itself and that can be moved (shifted) up and down or sideways. To use it properly you point the lens horizontally and shift it so that the camera's sensor covers the part of the image (the church) you are interested in.
It's not ideal. The best thing would be for the camera and lens to stay fixed while the sensor moved around inside the image circle, selecting the part of the picture that is desired. Obviously this is not terribly practical, so moving the lens instead of the sensor was chosen.
Let's correct a common misunderstanding: there are no shift lenses that are long telephoto lenses. Designing such a lens would be very difficult, if not impossible, because the image behind the lens would have to be sharp over a very wide region (unless of course you decided to limit the shift to a specific narrow range, which would require you to own a collection of telephoto shift lenses, one for each angle range).
For more detail see the Wikipedia article on perspective control.
The bad news is that shift lenses typically cost more than $2000 and don’t exist for all interchangeable lens cameras.
There is lots of software that allows you to stitch several pictures into one high resolution picture. My favorite is Autopano, but there are free alternatives, such as ICE from Microsoft Research.
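If you prefer to script it, OpenCV also ships a basic stitcher. A minimal sketch (filenames hypothetical), far less configurable than Autopano or ICE:

```python
import cv2

# Hypothetical overlapping shots sweeping across a facade.
files = ["left.jpg", "middle.jpg", "right.jpg"]
images = [cv2.imread(f) for f in files]

stitcher = cv2.Stitcher_create()          # panorama mode by default
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("facade_pano.jpg", pano)
else:
    print("Stitching failed, code:", status)
```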
When assembling a picture of a church, think of the distortion that image stitching can bring, especially if you are shooting at a wide angle.
Rectilinear projection is probably what you need, rather than spherical projection. It's very much like choosing between a normal wide angle lens and a fisheye.
It's important to realize that stitching requires the camera to be rotated around a point inside the lens called the nodal point. What's the issue here? Two objects that are aligned behind the lens in one shot need to stay aligned when you rotate the camera. Obviously, if you rotate and move the camera by a foot, the objects will not stay aligned. The closer the rotation point is to the nodal point, the better. It helps to have a special tripod head called a panorama head, especially if there are nearby objects in your picture. If everything is more than a mile from the camera it doesn't really matter much, but this won't be the case inside a church.
There is a video on YouTube which explains how to find a camera's nodal point.
You can use a shift lens to get an extreme wide angle effect: take two pictures with shifts in opposite directions (but don't move or rotate the camera) and then stitch the two pictures together. Do not use standard stitching software unless you fool it into thinking the lens you used was a very long one. Why is this? The camera records the focal length in the image file; this is part of the so-called EXIF data. The focal length determines the viewing angle of a single shot, so the stitching software uses it to work out how much adjustment is needed to correct for the angle change when assembling pictures. Now, two pictures taken with a shift lens involve no rotation at all, so the stitcher is completely bamboozled if it is told that the pictures were made with a wide angle lens. Telling it that the focal length is, say, 2 meters tells it that no distortion is required to glue the two pictures together.
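If you want to check what focal length your camera actually recorded, the EXIF data is easy to read. A small sketch with the Pillow library, assuming a recent version (the filename is hypothetical):

```python
from PIL import Image

img = Image.open("shifted_left.jpg")     # hypothetical filename
exif = img.getexif()

# FocalLength lives in the Exif sub-IFD (tag 0x8769), under tag 0x920A.
exif_ifd = exif.get_ifd(0x8769)
print("Recorded focal length (mm):", exif_ifd.get(0x920A))
```

Tools such as exiftool can rewrite that tag before you feed the files to the stitcher, which is one way of "fooling" it as described above.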
You might have a dark statue in front of a flat surface like a stained glass window or a fresco. How do you get rid of it?
Well, it’s tricky, but possible and will amaze the parish priest: no miracle required.
I use Picture Window, but this may also be possible in Photoshop.
I take several shots from different angles, from a distance, with a telephoto lens. The goal is to get a picture of every part of the surface by finding an appropriate angle for each.
I then remove all distortion from the pictures with, say, DxO Optics Pro.
Then I paint the sculpture part of each image black inside my photo editor.
Then I use a tool called image registration: it lets you make a composite picture from several pictures while telling the software "this point in picture A corresponds to that point in picture B", and so on. I choose 4 points in each picture to allow for the projective transformations that are necessary when the point of view changes. I also use a setting that says that if two pixels are supposed to be the same point on the surface, the lighter one is kept. This is why I paint the sculpture black: it makes it disappear.
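I do this in Picture Window, but to make the idea concrete, here is a rough sketch of the same registration-plus-lighten-blend in Python with OpenCV and numpy. This is not how Picture Window implements it; the filenames and point coordinates are invented, and in practice you would pick the 4 corresponding points by hand:

```python
import cv2
import numpy as np

# Two hypothetical telephoto shots of the same window from different
# angles, with the statue already painted black in each.
img_a = cv2.imread("window_left_blacked.jpg")
img_b = cv2.imread("window_right_blacked.jpg")

# Four hand-picked corresponding points (illustrative coordinates):
# the same physical spots on the window in each shot.
pts_a = np.float32([[120, 60], [980, 75], [990, 850], [110, 860]])
pts_b = np.float32([[140, 90], [1000, 60], [1010, 840], [130, 880]])

# Projective transform mapping picture B onto picture A's geometry;
# 4 point pairs are exactly what is needed to define it.
M = cv2.getPerspectiveTransform(pts_b, pts_a)
h, w = img_a.shape[:2]
b_warped = cv2.warpPerspective(img_b, M, (w, h))

# "Lighter" blend: wherever the blacked-out statue hides the window in
# one shot, the other shot's brighter pixels win.
composite = np.maximum(img_a, b_warped)
cv2.imwrite("window_no_statue.jpg", composite)
```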
I have never seen this technique described anywhere.