Friday, 31 October 2008

Using Reflected-Light Meters

Once you have set the proper film or camera speed or sensitivity (characterised by a numerical value following the letters ‘ISO’; to further understand the photographic process and film sensitivity, students may visit Perry Sprawls at his website) on your camera or meter, you are ready to make the exposure-meter reading. With a reflected-light meter (in camera or handheld), point the camera or meter at the subject. The meter will measure the average brightness of the light reflected from the various parts of the scene. With an in-camera meter, a needle or diode display in the viewfinder or an LCD display on top of the camera will tell you when you have achieved the proper combination of lens and shutter-speed settings. If the camera is fully manual, you will have to set both the aperture and shutter speed. Automatic cameras may set both shutter speed and aperture; or they may set just one of the controls, leaving you to set the other.

If you're using a handheld meter, read the information on your meter and set the camera controls accordingly. An overall exposure reading taken from the camera position will give good results for an average scene with an even distribution of light and dark areas. For many subjects, then, exposure-meter operation is mostly mechanical; all you do is point the meter (or camera) at the scene and set the aperture and shutter speed as indicated. But your meter does not know if you need a fast shutter speed to stop action or a small aperture to extend depth of field. You will have to select the appropriate aperture and shutter combination for the effect you want. There will be other situations where either the lighting conditions or the reflective properties of the subject will require you to make additional judgements about the exposure information the meter provides, and you may have to adjust the camera controls accordingly.

A reflected-light meter reading is influenced by both how much light there is in the scene and how reflective the subject is. The meter will indicate more exposure for a subject that reflects little light than for one that reflects a great deal, even if the two subjects are in the same scene and in the same light. Because reflected-light meters are designed to make all subjects appear average in brightness, the brightness equivalent to medium grey, they suggest camera settings that will overexpose (make too light) very dark subjects and underexpose (make too dark) very light subjects.

Because reflected-light meters are influenced most by the largest areas of the scene, the results will be acceptable even when the main subject fills the picture, provided it is of average reflectance (neither very light nor very dark). However, what happens if a relatively small subject is set against a large dark or light background? The meter will indicate a setting accurate for the large area, not for the smaller, but important, main subject. Therefore, when the area from which you take a reflected-light reading is very light or very dark, and you want to expose it properly, you should modify the meter's exposure recommendation as follows:
• For light subjects, increase exposure by 1/2 to 1 stop from the meter reading.
• For dark subjects, decrease exposure by 1/2 to 1 stop from the meter reading.
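These corrections amount to simple powers-of-two arithmetic on the exposure. As a rough sketch (the function name and the choice of adjusting via shutter time are mine, not from any meter's manual):

```python
def compensate_shutter(metered_time, stops):
    """Shutter time after applying exposure compensation.

    Positive stops mean more exposure (a longer time);
    negative stops mean less exposure (a shorter time).
    """
    return metered_time * (2 ** stops)

# Light subject (e.g. snow): increase exposure by 1 stop from a metered 1/250 s
print(compensate_shutter(1 / 250, +1))  # 0.008, i.e. 1/125 s
# Dark subject: decrease exposure by 1 stop
print(compensate_shutter(1 / 250, -1))  # 0.002, i.e. 1/500 s
```

The same correction could equally be applied at the aperture ring; only the direction of the adjustment matters.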

Please remember that since a reflected meter reads the intensity of light reflecting off the subject, it is easily fooled by variances in tonality, colour, contrast, background brightness, surface texture and shape. What you see is often not at all what you get. Reflected meters do a good job of reading the amount of light bouncing off a subject; the trouble is that they don't take into account any other factors in the scene. They are merciless in recording all things as a medium tone. A reflected measurement of any single-tone area, for instance, will result in a neutral grey rendition of that object. A subject that appears lighter than grey reflects excess light and will record darker than it appears. A subject that is darker than grey reflects less light, resulting in an exposure that renders it lighter.

Selective Meter Readings

To determine the correct exposure for high-contrast scenes with large areas that are much darker or much lighter than the principal subject, take a selective meter reading of only the subject itself. How do you do this? Move the meter or camera close to the subject. Exclude unimportant dark or light areas that will give misleading readings. In making close-up readings, also be careful not to measure your own shadow or the meter's shadow.

Selective meter readings are useful for dark subjects against a bright background like snow or light sand, or for subjects in shade against a bright sunlit background. There is also the reverse of this: The subject is in bright sun and the background is in deep shade. In all these situations, your camera has no way of knowing which part of the scene is the most important and requires the most accurate exposure, so you must move in close so the meter will read only the key subject area. For example, if you want to photograph a skier posed on a snowy slope on a bright, sunny day, taking an average reading of the overall scene will result in underexposure. The very bright snow will overly influence the meter and the reading will be too high. The solution is to take a close-up reading from the skier's face (or a piece of medium-toned clothing) and then step back the desired distance to shoot the picture. Some cameras with built-in meters have an exposure-hold button or switch to lock the exposure setting when you do this. This technique is useful anytime the surroundings are much brighter or darker than your subjects.

Landscapes and other scenes with large areas of open sky can also fool the meter (see picture on the left, originally posted to Flickr as Rays of sunlight by Spiralz). The sky is usually much brighter than other parts of the scene, so an unadjusted meter reading will indicate too little exposure for the darker parts of the picture. One way to adjust for this bias without having to move in close is to tilt your lens or meter down to exclude the sky while taking your meter reading. The sky will probably end up slightly overexposed, but the alternative would be to find a different shooting position that excludes most or all of the sky. There are also graduated neutral density filters that work well in such situations. A neutral density filter absorbs all colours of visible light evenly, and you can position a graduated filter so that the darker portion is at the top of the image, where it will darken the sky without affecting the ground below. Incidentally, some built-in meters are bottom-weighted to automatically compensate for situations like this, so check your manual.

Bright backlighting with the subject in silhouette can also present a challenge. With the light shining directly into the lens or meter, aiming the meter into the light can cause too high a reading. If you don't want to underexpose the subject, take a close-up reading, being especially careful to shade the lens or meter so that no extraneous light influences the reading.

Substitute Readings

What if you can't walk up to your subject to take a meter reading? For instance, suppose that you're trying to photograph a deer in sunlight at the edge of a wood. If the background is dark, a meter reading of the overall scene will give you an incorrect exposure for the deer. Obviously, if you try to take a close-up reading of the deer, you're going to lose your subject before you ever get the picture. One answer is to make a substitute reading off the palm of your hand, provided that your hand is illuminated by the same light as your subject, then use a lens opening 1 stop larger than the meter indicates. For example, if the reading off your hand is f/16, open up one stop to f/11 to get the correct exposure. The exposure increase is necessary because the meter overreacts to the brightness of your palm, which is about twice as bright as an average subject. When you take the reading, be sure that the lighting on your palm is the same as on the subject. Don't shade your palm.
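The one-stop shift between f-numbers can be checked numerically: each full stop changes the f-number by a factor of the square root of 2. A small illustrative sketch (the function name is my own):

```python
import math

def open_up(f_number, stops=1):
    """f-number after opening the aperture by the given number of stops.

    Each full stop divides the f-number by sqrt(2).
    """
    return f_number / (math.sqrt(2) ** stops)

# Palm reading of f/16, opened up 1 stop:
print(round(open_up(16), 1))  # 11.3, i.e. the marked f/11
```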

Another subject from which you can take more accurate and more consistent meter readings is a KODAK Grey Card, sold by photo dealers. These sturdy cards are manufactured specifically for photographic use. They are neutral grey on one side and white on the other. The grey side reflects 18% of the light falling on it (similar to that of an average scene), and the white side reflects 90%. You can use a grey card for both black-and-white and colour balance. Complete instructions are included in the package with the cards.

Handling High Contrast

How do you determine the correct exposure for a high-contrast scene, one that has both large light and dark areas? If the highlights or shadow areas are more important, take a close-up reading of the important area to set the exposure. With colour slide film, keep in mind that you will get more acceptable results if you bias the exposure toward the highlights, losing the detail in the shadows. In a slide, the lack of detail in the shadows is not as distracting as overexposed highlights that project as washed-out colour and bright spots on the screen. If you are working with black-and-white film, you can adjust the development for better reproduction of the scene contrast, particularly in the highlights.

But what if the very light and very dark areas are the same size and they are equally important to the scene? One solution is to take selective meter readings from each of the areas and use an f-number that is midway between the two indicated readings. For instance, if your meter indicates an exposure of 1/125 second at f/22 for the brightest area and 1/125 second at f/2.8 for the darkest area--a range of six stops--set your camera to 1/125 second at f/8. This is a compromise solution, but sometimes it is your only choice short of coming back another day or changing your viewpoint, and the composition of the picture, to eliminate the contrast problem.
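Because stops are logarithmic, the "midway" setting is simply the geometric mean of the two f-numbers. A sketch of the arithmetic (the helper names are my own):

```python
import math

def stops_between(f_wide, f_narrow):
    """Number of full stops between two f-numbers (one stop = factor sqrt(2))."""
    return 2 * math.log2(f_narrow / f_wide)

def midpoint_f_number(f_wide, f_narrow):
    """f-number halfway (in stops) between two readings: their geometric mean."""
    return math.sqrt(f_wide * f_narrow)

print(round(stops_between(2.8, 22)))      # 6 stops
print(round(midpoint_f_number(2.8, 22)))  # 8, i.e. f/8
```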

Using Spot Meters

Perhaps the best solution when you need a selective meter reading is offered by the spot meter. Handheld averaging meters generally cover about 30°, while handheld spot meters typically read a 1° angle. The angles of spot meters built into cameras are usually wider, about 3 to 12°. The biggest advantage of a spot meter is that it allows you to measure the brightness of small areas in a scene from the camera position, without walking in to make a close-up reading. Since a spot meter measures only the specific area you point it at, the reading is not influenced by large light or dark surroundings. This makes a spot meter especially useful when the principal subject is a relatively small part of the overall scene and the background is either much lighter or darker than the subject. Spot meters are also helpful for determining the scene brightness range. See picture on the left by Joseph Dickerson (the image has all rights reserved: Photos © 2004, Joseph A. Dickerson)

A spot meter can take more time to use, since it usually requires more than one reading of the scene. This is particularly true when the scene includes many different bright or dark areas. To determine the best exposure in such a situation, use the same technique described previously for high-contrast subjects: select the exposure halfway between the reading for the lightest important area in the scene and that for the darkest important area. Bear in mind, though, that all films have inherent limits on the range of contrast they can accurately record. Remember too that you can sometimes create more dramatic pictures by intentionally exposing for one small area, such as a bright spot of sunlight on a mountain peak, and letting the dark areas fall into black shadow without detail. Spot meters are ideal for such creative applications. See the picture on the left by Dave Johnson for an example of creative control of light and careful planning of exposure after taking multiple spot-meter readings; note, too, the subject and its environment - a spot meter is usually convenient in such situations.

Using Incident-Light Meters

a. Set ISO/ASA of film being used.
b. Hold light meter in front of scene with the sphere pointed at the camera.
c. Depress centre button.
d. Needle will move to a reading.
e. The reading is measured on the foot-candle scale.

Depending on the lighting conditions, there are two settings that can be utilized - the Red Arrow setting (when the High Slide is inserted in the slot below the sphere) is used outdoors in bright light and the Black Arrow setting (High Slide is removed) is used in lower light circumstances.

f. Move dial to Black Arrow setting when High Slide is not used so the number lines up with the corresponding number on scale.


g. Move dial to Red Arrow setting when High Slide is used so the number lines up with the corresponding number on scale.
h. Shutter speed scale
i. Aperture scale

Please note that the above image and text have been taken from another website; I have tried not to tamper with either the text or the image. In fact, you may want to visit that website.

Moving on, to use an incident-light meter, hold it at or near the subject and aim the meter's light-sensitive cell back toward the camera. The meter reads the amount of light illuminating the subject, not light reflected from the subject, so the meter ignores the subject and background characteristics. As with a reflected reading, an incident reading provides exposure information for rendering average subjects correctly, making incident readings most accurate when the subject is not extremely bright or dark.

When taking an incident-light reading, be sure you measure the light illuminating the side of the subject you want to photograph, and be careful that your shadow isn't falling on the meter. If the meter isn't actually at the subject, you can get a workable reading by holding the meter in the same kind of light the subject is in. Because the meter is aimed toward the camera and away from the background light, an incident reading is helpful with backlit subjects. This is also the case when the main subject is small and surrounded by a dominant background that is either much lighter or darker.

The exposure determined by an incident-light meter should be the same as that from reading a grey card with a reflected-light meter. Fortunately, many scenes have average reflectance with an even mix of light and dark areas, so the exposure indicated is good for many picture-taking situations. However, if the main subject is very light or very dark, and you want to record detail in this area, you must modify the meter's exposure recommendations as follows:
• For light subjects, decrease exposure by 1/2 to 1 stop from the meter reading.
• For dark subjects, increase exposure by 1/2 to 1 stop from the meter reading.
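Taken together with the reflected-light rules given earlier, the opposite sign conventions can be summarised compactly. The dictionary below is only an illustrative summary of this article's two rules of thumb, not part of any meter's documentation:

```python
# Correction in stops (+ = more exposure, - = less) for non-average subjects.
CORRECTIONS = {
    ("reflected", "light subject"): +1,  # meter underexposes; open up
    ("reflected", "dark subject"):  -1,  # meter overexposes; close down
    ("incident",  "light subject"): -1,  # close down to hold highlight detail
    ("incident",  "dark subject"):  +1,  # open up to hold shadow detail
}

print(CORRECTIONS[("incident", "light subject")])  # -1
```

(In practice the correction is anywhere from 1/2 to 1 stop; a full stop is shown here for simplicity.)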

You will notice that these adjustments are just the opposite from those required for a reflected-light meter. An incident meter does not work well when photographing light sources because it cannot meter light directly. In such situations you will be better off using a reflected-light meter or an exposure table.

If the scene is unevenly illuminated and you want the best overall exposure, make incident-light readings in the brightest and darkest areas that are important to your picture. Aim the meter in the direction of the camera position for each reading. Set the exposure by splitting the difference between the two extremes.

Actual measuring

Foot-candle meters are the most commonly used meters for video. These meters display the amount of light striking them on a scale calibrated in foot candles, from 0 to 500, and are not dependent on any other factors.

In order to get an accurate reading, the meter needs to be placed immediately in front of the subject, facing the light source to be measured. The easiest way to take the key, fill, and back light measurements is to do them one at a time, with the other two lights turned off. Start with just the key light on, and position the meter in front of the area of the subject struck by the key light. Aim the meter at the key light and note the number of foot candles - say it reads 100 foot candles. Next, turn on just the fill light and take another reading facing the fill light. If the intended lighting ratio is 2:1, the fill light should read about 50 foot candles.

Now, turn on just the back light and take its reading. The back light should be somewhere between 50 and 150 foot candles, depending on the effect desired. The final step is to re-measure the key, fill, and back light positions with all the lights on. This is important since where the illumination from the lights overlaps the intensity increases. Adjust the intensity of the lights as needed to maintain the desired lighting ratio.
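The ratio check is plain division of the foot-candle readings. A minimal sketch (the function names are my own):

```python
def lighting_ratio(key_fc, fill_fc):
    """Key-to-fill lighting ratio from foot-candle readings."""
    return key_fc / fill_fc

def fill_for_ratio(key_fc, ratio):
    """Foot-candle reading the fill light should give for a target ratio."""
    return key_fc / ratio

print(lighting_ratio(100, 50))  # 2.0, i.e. a 2:1 ratio
print(fill_for_ratio(100, 2))   # 50.0 foot candles
```

After re-measuring with all lights on, re-run the same division to confirm that the overlapping illumination has not pushed the ratio off target.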

Understanding ISO / ASA

Use a meter reading as a guideline rather than a dictate for correct exposure. This makes it important that you understand how your particular meter works so you can consistently get good results no matter what the lighting. The place to begin this understanding is the instruction manual that came with your meter or camera. The instructions should familiarize you with the meter's specific features, its flexibility, and its limitations. Most camera and exposure meter instructions provide the basic techniques of light measurement and mention some of the situations that may "fool" the meter. If you can't find the instructions, write to the manufacturer for them.

It may be appropriate to understand one standard before actually handling light meters. This standard is often referred to as ASA or, more correctly, as ISO. International Standard ISO 5800:1987 from the International Organization for Standardization (ISO) defines both an arithmetic scale and a logarithmic scale for measuring colour-negative film speed. Related standards ISO 6:1993 and ISO 2240:2003 define scales for the speeds of black-and-white negative film and colour reversal film.

In the ISO arithmetic scale, which corresponds to the older ASA scale, doubling the speed of a film (that is, halving the amount of light that is necessary to expose the film) implies doubling the numeric value that designates the film speed. In the ISO logarithmic scale, which corresponds to the older DIN scale, doubling the speed of a film implies adding 3° to the numeric value that designates the film speed. For example, a film rated ISO 200/24° is twice as sensitive as a film rated ISO 100/21°. Commonly, the logarithmic speed is omitted, and only the arithmetic speed is given (e.g., “ISO 100”). In such cases, the quoted “ISO” speed is essentially the same as the older “ASA” speed.
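The arithmetic-to-logarithmic conversion follows the formula DIN degrees = 10 × log10(ASA) + 1, rounded to the nearest degree. A quick sketch to verify the paired values quoted above (the function name is my own):

```python
import math

def din_from_asa(asa):
    """Logarithmic (DIN-style) speed in degrees from arithmetic (ASA) speed."""
    return round(10 * math.log10(asa) + 1)

for asa in (50, 100, 200, 400):
    print(f"ISO {asa}/{din_from_asa(asa)}°")
# ISO 50/18°, ISO 100/21°, ISO 200/24°, ISO 400/27°
```

Note that each doubling of the arithmetic speed adds 3° to the logarithmic speed, exactly as described.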

ISO or ASA (American Standards Association) is, in the most basic terms, the speed with which your film or digital camera responds to light; the higher the ISO/ASA rating, the more sensitive the film or CCD/CMOS sensor is to light.

In terms of film, stock with lower sensitivity (a lower ISO speed rating such as 50 or 100) requires a longer exposure and is thus called slow film, while stock with higher sensitivity (a higher ISO speed rating such as 400 or 800) can shoot the same scene with a shorter exposure and is called fast film.

The same holds true for digital cameras, except that you are adjusting the sensitivity of the CCD or CMOS sensor rather than actually using different film. This is one of the advantages of the digital format: you can change the ISO setting from one shot to the next without having to physically change film stock!

The basic rule is that a higher ISO gives a higher shutter speed at the same aperture setting, so less blur. The trade-off is that a higher ISO also gives more noise or grain in your images, which can be a bad thing if it's not a look you appreciate.
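At a fixed aperture, the required shutter time scales inversely with ISO. A small sketch of that relationship (the function name and example values are my own):

```python
def shutter_at_iso(base_time, base_iso, new_iso):
    """Equivalent shutter time at a new ISO, same aperture and scene light."""
    return base_time * base_iso / new_iso

# Metered 1/60 s at ISO 100; at ISO 400 the same aperture needs only:
t = shutter_at_iso(1 / 60, 100, 400)
print(f"1/{round(1 / t)} s")  # 1/240 s
```

Two stops more sensitivity buys two stops of shutter speed, which is exactly what lets you freeze motion in dimmer light.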

A slow shutter speed will give you pictures like the one on the right:

This is so because the camera was not held steady during the exposure.


A somewhat similar problem confronts us when the exposure does not permit a shutter speed fast enough to arrest the motion of the subject! See the picture below to understand the problem:


ISO is the term generally used on digital cameras; the earlier standard was ASA, superseded in later years by ISO.

To get a bit more technical, this is the ISO arithmetic (linear) scale, which corresponds to the older ASA scale: doubling the speed of a film (that is, halving the amount of light that is necessary to expose the film) implies doubling the numeric value that designates the film speed - hence 50, 100, 200, 400, 800 and 1600. In my experience with Nikon, their digital ISO ratings tend to be exactly the same as their real film counterparts; the same cannot be said of all other manufacturers.

Monday, 20 October 2008

Introduction to Montage


It means cutting together or assembling; it is based on the principle that the whole is more than the sum of its parts.

The original meaning is only the first part of the visual statement, according to montage theory. It's open and -- incomplete. What is missing in the static world of images? You! What montage does -- the thought (action) in evolution with the next shot "throws the meaning" on the previous shot! (In primitive terms we call it a reaction shot). The second shot in its turn is incomplete also -- it asks for another shot! That's how we crave for continuity and can't take our eyes away from the screen! Well, montage theory doesn't look so simple anymore.

The great formula of montage:
1 + 1 > 2
(Following the logic of dialectics (thesis, antithesis and synthesis), the sum of two parts is greater if they are connected.)

Soviet montage theory is an approach to understanding and creating cinema that relies heavily upon editing (montage is French for "putting together"). Although Soviet filmmakers in the 1920s disagreed about how exactly to view montage, Sergei Eisenstein marked a note of accord in "A Dialectic Approach to Film Form" when he noted that montage is "the nerve of cinema," and that "to determine the nature of montage is to solve the specific problem of cinema."

While several Soviet filmmakers, such as Lev Kuleshov, Dziga Vertov, and Vsevolod Pudovkin put forth explanations of what constitutes the montage effect, Eisenstein's view that "montage is an idea that arises from the collision of independent shots" wherein "each sequential element is perceived not next to the other, but on top of the other" has become most widely accepted.

In formal terms, this style of editing offers discontinuity in graphic qualities, violations of the 180 degree rule, and the creation of impossible spatial matches. It is not concerned with the depiction of a comprehensible spatial or temporal continuity as is found in the classical Hollywood continuity system. It draws attention to temporal ellipses because changes between shots are obvious, less fluid, and non-seamless.

Eisenstein’s montage theories are based on the idea that montage originates in the "collision" between different shots in an illustration of the idea of thesis and antithesis. This basis allowed him to argue that montage is inherently dialectical, thus it should be considered a demonstration of Marxism and Hegelian philosophy. His collisions of shots were based on conflicts of scale, volume, rhythm, motion (speed, as well as direction of movement within the frame), as well as more conceptual values such as class.

Types of Montages

Analytical and Idea-Associative montages are the two major types; the third, the metric montage, is primarily concerned with rhythm rather than juxtaposition. In Analytical Montage, an event is analyzed for its theme and construction. Essential shots are selected and these are synthesized into a precise series of shots that make up an intense event on screen.

  • Analytical Montage: a). Sequential Analytical Montage, b). Sectional Analytical Montage.
  • Idea Associative: a). Comparison Montage, b). Collision Montage.
  • Metric Montage

In Sequential Analytical Montage, an event is condensed into its key developmental elements and put in a cause-effect sequence. The main event is implied rather than shown. It requires the viewers to apply psychological closure to fill in the gaps, so that they feel more involved in the scene; the viewer becomes a participant.

The time order never changes - it can only be condensed and intensified. Such a sequence helps in plot development and narrative continuity.

The three diagrams above illustrate the steps involved in making a sequential analytical montage. This type of montage represents the key developmental elements of an event in a cause-effect sequence.

Diagram: The proposal, the engagement, the birth of the first child followed by the birth of a second child: the selected shots are sequenced in the order of the actual event according to logic (cause and effect).

Illustration: The proposal, the birth of the first child, the birth of a second, followed by marriage: if the proper sequence of the event is not maintained, then the meaning changes (change in meaning).

Sectional Montage

  • The event sections are not arranged along the horizontal time vector (event progression)
  • But along the vertical vector (event intensity and complexity)
  • It arrests one moment in the event. (subjective time, the vertical line)
  • Stretching time duration - opposite to condensing time (cuts)

This shows an event from various viewpoints. It does not follow any particular sequence. It thus shows the various complexities of a particular moment. Unlike the sequential montage, it stops the event from progressing temporarily and examines a section of it. The basic order of the shots is still important to establish the point of view. However, the shots are rhythmically precise.

It can stress the simultaneity of the event through the split screen or multiple screen montages.
In the first illustration the event is shown from the students' point of view: one feels bad for the students because they are subjected to a boring lecture, simply because shots of the students are shown first.

In the second, the teacher is shown first; hence it adopts the teacher's point of view.

So the viewers sympathize with the teacher, who is delivering a lecture while the students are not that interested.

Idea - Associative Montage

Here two unrelated events are juxtaposed to create a third meaning. This technique was developed in the days of the silent-film era to express ideas and concepts that could not be shown in a narrative picture sequence. These fall under two categories:

Comparison montage

  • These comprise shots juxtaposed with thematically related events to reinforce a basic theme or idea.
  • Silent films would often juxtapose a shot of a political leader with a shot of a preening peacock to depict the politician's vanity.
  • Comparison montage acts like an optical illusion to influence perception of the main event.

The Russian filmmaker Kuleshov conducted several experiments on the aesthetics of montage. To show the impact of juxtaposition and context, he interspersed the expressionless face of an actor with unrelated shots of emotional value, such as a child playing, a plate of soup, and a dead woman; viewers thought they were seeing the actor's reaction to each event.

Television advertisements often use this technique to send complex messages quickly across to viewers, e.g. a running tiger dissolves into a car gliding on the road - a hyperbole signifying that the car has the strength, agility, and grace of a tiger.

Collision montage

Two events collide to enforce a concept, feeling, or idea. The conflict creates tension.


In the following illustration the first picture is of a dog looking for food; it is juxtaposed with a homeless person doing the same. This shows that the poor are being neglected by society.

What's wrong with this picture?

In comparison montages, multiple screens containing simultaneous montages can also be shown. This is done in news broadcasts, where various types of information are given on screen at once; enough care must be exercised, otherwise an inaccurate message may be conveyed to the viewer.

Collision Montage: Two events are collided to enforce a concept, feeling, or idea. The conflict creates tension and heightens the viewers' experience. These types of montage should not be too obvious, otherwise viewers become annoyed rather than involved.

This montage makes the viewers aware of the plight of the homeless, of insensitivity and social injustice.

The Visual Dialectical Principle

The aesthetic principle upon which the collision montage is based is called the visual dialectic. This means that opposing, contradictory statements can be juxtaposed so that their contradictions are resolved into universally true axioms.


By juxtaposing a thesis or statement with its antithesis or counterstatement, one arrives at a synthesis. In other words, a thesis opposed by an antithesis results in a new synthesis (a new thesis) in which the two opposing conditions are resolved into a higher-order statement.

  • The Russian filmmaker Eisenstein frequently used it, not only as a principal task of montage but as the basis for an entire film.

The Metric Montage

  • Editing follows a specific number of frames (based purely on the physical nature of time), cutting to the next shot no matter what is happening within the image.
  • This montage is used to elicit the most basal and emotional of reactions in the audience.
  • This is a rhythmic structuring device a series of related or unrelated images are flashed across the screen at regular intervals.
  • A metric montage is created by cutting a film into equal lengths regardless of the colour, content or continuity of the shots - one can actually clap one's hands to the beat.
  • A tertiary motion is created.
  • In an accelerated metric montage the shots become progressively shorter; this can punctuate a high point.
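Since a metric montage cuts purely on frame count, the cut points fall at fixed intervals. A minimal sketch (the frame rate and shot length below are only illustrative):

```python
def metric_cut_points(total_frames, shot_length):
    """Frame numbers at which to cut for a metric montage of equal-length shots."""
    return list(range(shot_length, total_frames, shot_length))

# A 10-second sequence at 24 fps (240 frames), cut every 48 frames (2-second shots):
print(metric_cut_points(240, 48))  # [48, 96, 144, 192]
```

An accelerated variant would simply shrink the interval from one cut to the next rather than keep it constant.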

'Invisible Editing'

This is the omniscient style of the realist feature films developed in Hollywood. The vast majority of narrative films are now edited in this way. The cuts are intended to be unobtrusive except for special dramatic shots. It supports rather than dominates the narrative: the story and the behaviour of its characters are the centre of attention. The technique gives the impression that the edits are always motivated by the events in the 'reality' that the camera is recording, rather than being the result of a desire to tell a story in a particular way. The editing isn't really 'invisible', but the conventions have become so familiar to visual literates that they no longer consciously notice them.


The dominant system of editing, handed down from the Hollywood tradition, is known as continuity editing, in which the cuts are made invisible to produce a seamless visual and narrative experience.

Continuity editing involves such techniques as:

  • Continuity editing relies upon matching screen direction, position, and temporal relations from shot to shot.
  • Motivated cuts - If a story is to be told the cuts have to be seamless. This can be achieved by ensuring that the content motivates the cut. For example if one hears a door open and a character turns his head, one expects to see a cut to the door.
  • The 180 Degree Rule - Two characters in the same scene must maintain the same left/right relationship throughout the scene. In other words, if in a particular shot Character A is on the left facing right and Character B is on the right facing left, you should keep the camera positioned so the characters stay facing the same direction. If the camera “crosses the line” between the characters and shoots them from the other side, you end up with a reverse cut in which the characters' positions are switched. Even if you cut to a shot of Character B alone, he should still be facing left. While it's not essential that you follow the 180 degree rule, most directors do so in order to avoid disorienting the viewer.
  • Shot-reverse-shot structuring that obeys the 180 degree rule - positing an artificial line which the camera cannot cross, thereby creating the illusion of a unified space across shots.
  • Cuts on action - creating the illusion of continuous motion from one shot to the next. The reason behind this rule is that cutting on action distracts the audience less. People focus on the action occurring, not the cut, and thus are less likely to notice any mistakes like jump cuts. For example, if a woman turns her head to look at something, the cut to the object of interest should be made midway through the action of turning.
  • Eye-line match - in which the look of a character is matched spatially to what he or she is looking at.
  • Sound bridging - in which continuous music or sound is used to bridge the cuts between shots, among other techniques.

In this sequence from Neighbours (Buster Keaton, 1920), continuity is maintained by the spatial and temporal contiguity of the shots and the preservation of direction between world and screen. More importantly, the shots are matched on Keaton's actions as he shuttles across the courtyard from stairwell to stairwell.

In the Hollywood continuity editing system the angle of the camera axis to the axis of action usually changes by more than 30° between two shots, for example in a conversation scene rendered as a series of shot/reverse shots. The 180° line is not usually crossed unless the transition is smoothed by a POV shot or a re-establishing shot.

Visible vs. Invisible Technique

  • Most filmmakers prefer the standard conventions of continuity editing.
  • The classical narrative mode refers to the narrative style common in films of the classical Hollywood period from the 1940s to the 1960s. These films came from the studio system and its concern for commercial success. Despite the different conventions associated with each genre, these films were about escapism and therefore shared a narrative mode favouring the cause-and-effect linkage of events, thereby keeping the audience engrossed in the story. The classical model used continuity editing, which is covert, in order to create a unity of time and space and to tell the story without drawing attention to the film as something that has been constructed.
  • Departures from the invisible technique can come across as a lack of knowledge, or as the work of a careless, inexperienced crew.
  • People generally believe that a character should be recognisable throughout a film; images that evoke ambiguity and uncertainty in the minds of viewers, without clear character and plot, can irritate the audience and disrupt the viewing experience.
  • Certain filmmakers also assume that using an alternative film language signals ignorance behind the camera. Because such conventions are not widely understood, they feel it is safer to use the classical Hollywood style: a tried and tested method that attracts and sustains the audience's interest and keeps them absorbed in the story.

The basic concept is to create an illusion of continuity while leaving out parts of the action that slow the film's pacing.

Each story is expected to have a beginning, a middle and an end; that is what audiences expect to see. In reality, however, life is unpredictable and uncertain, full of confusing activity that does not always make sense. By employing the visible technique, then, a film can feel more real.

Editing Guidelines – Irrespective of the Technique

  • Video professionals know that production techniques are best when they are transparent; i.e., when they go unnoticed by the average viewer.
  • However, in music videos, commercials, and program introductions, we are in an era where production (primarily editing) techniques are being used as a kind of "eye candy" to mesmerize audiences.

Guideline #1: Edits work best when they are motivated.

  • In making any cut or transition from one shot to another there is a risk of breaking audience concentration and subtly pulling attention away from the story or subject matter.
  • When cuts or transitions are motivated by production content they are more apt to go unnoticed. For example, if someone glances to one side during a dramatic scene, we can use that as motivation to cut to whatever has caught the actor's attention.
  • When one person stops talking and another starts that provides the motivation to make a cut from one person to the other.
  • If we hear a door open, or someone calls out from off-camera, we generally expect to see a shot of whoever it is. If someone picks up a strange object to examine it, it's natural to cut to an insert shot of the object.

Guideline #2: Whenever possible, cut on subject movement.

If cuts are prompted by action, that action will divert attention from the cut, making the transition more fluid. Small jump cuts are also less noticeable because viewers are caught up in the action.

If a man is getting out of a chair, you can cut at the midpoint in the action. In this case some of the action will be included in both shots. In cutting, keep the 30-degree rule in mind.

Maintaining Consistency in Action and Detail

Editing for single-camera production requires great attention to detail. Directors will generally give the editor more than one take of each scene. Not only should the relative position of feet or hands, etc., in both shots match, but also the general energy level of voices and movements.

There is also the need to make sure nothing has changed in the scene -- hair, clothing, the placement of props, etc. and that the talent is doing the same thing in exactly the same way in each shot.

Note in the photos below that if we cut from the close-up of the woman talking to the four-shot on the right, the angle of her face changes along with the lighting. (Because of the location of the window, we would assume the key light would be on our left.)

These things represent clear continuity problems -- made all the more apparent in this case because our eyes would be focused on the woman in red.

Part of the art of acting is maintaining absolute consistency between takes.

This means that during each take talent must remember to synchronize moves and gestures with specific words in the dialogue. Otherwise, it will be difficult, if not impossible, to cut directly between these takes during editing.

It's the Continuity Director's job to see not only that the actor's clothes, jewelry, hair, make-up, etc., remain consistent between takes, but that props (movable objects on the set) also remain consistent.

It's easy for an object on the set to be picked up at the end of one scene or take and then be put down in a different place before the camera rolls on the next take. When the scenes are then edited together, the object will then seem to disappear, or instantly jump from one place to another.

Discounting the fact that one would not want to cut between two shots that are very similar, do you see any problem in cutting between the two shots above?

There is the obvious disappearance of her earrings and a difference in color balance, but did you notice the change in the direction of the key light and the position of the hair on her forehead?

Entering and Exiting the Frame

As an editor, you often must cut from one scene as someone exits the frame on the right and then cut to another scene as the person enters another shot from the left.

It's best to cut out of the first scene as the person's eyes pass the edge of the frame, and then cut to the second scene about six frames before the person's eyes enter the frame of the next scene.

The timing is significant.

It takes about a quarter of a second for viewers' eyes to switch from one side of the frame to the other. During this time, whatever is taking place on the screen becomes a bit scrambled and viewers need a bit of time to refocus on the new action. Otherwise, the lost interval can create a kind of subtle jump in the action.
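Since the guideline above is really just frame arithmetic, it can be sketched as a quick calculation (the helper name is hypothetical, not part of any editing package):

```python
def entry_lead_frames(frame_rate, gaze_shift_s=0.25):
    """Frames of the second shot to play before the subject's eyes
    enter the frame, based on the roughly quarter-second it takes
    viewers to shift their gaze across the screen."""
    return round(frame_rate * gaze_shift_s)

# At PAL's 25 fps, a quarter-second gaze shift spans about 6 frames,
# matching the "about six frames" rule of thumb above.
print(entry_lead_frames(25))     # 6
print(entry_lead_frames(29.97))  # 7 (NTSC)
```

The same helper shows why the guideline must be restated for other frame rates: the viewer's reaction time is fixed, but the number of frames it spans is not.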

Like a good magician who can draw your attention away from something they don't want you to see, an editor can use distractions in the scene to cover the slight mismatches in action that inevitably arise in single-camera production.

An editor knows that when someone in a scene is talking, attention is generally focused on the person's mouth or eyes, and a viewer will tend to miss inconsistencies in other parts of the scene.
Or, as we've seen, scenes can be added to divert attention. Remember the role insert shots and cutaways can play in covering jump cuts.

Guideline #3: Keep in mind the strengths and limitations of the medium. Remember:

An editor must remember that a significant amount of picture detail is lost in video images, especially in the 525- and 625-line television systems.

  • The only way to show needed details is through close-ups.

Except for establishing shots designed to momentarily orient the audience to subject placement, the director and the editor should emphasize medium shots and close-ups.

There are some things to keep in mind in this regard.

Close-ups on individuals are appropriate for interviews and dramas, but not as appropriate for light comedy. In comedy the use of medium shots keeps the mood light. You normally don't want to pull the audience into the actors' thoughts and emotions.

In contrast, in interviews and dramatic productions it's generally desirable to use close-ups to zero-in on a subject's reactions and provide clues to the person's general character.

  • In dramatic productions a director often wants to communicate something of what's going on within the mind of an actor. In each of these instances, the judicious and revealing use of close-ups can be important.

A List of Contemporary Montage Sequences

Many films are well known for their montage scenes. Examples include:

  • The training regimen montages in Sylvester Stallone's Rocky series of movies and later, a parody by Budweiser in a 2008 Super Bowl commercial in which a Dalmatian coaches a Clydesdale horse.
  • The Takashi Miike film Dead or Alive features a highly kinetic opening montage where several main characters are obliquely shown conducting various actions.
  • Dirty Dancing
  • Flashdance
  • several of director Sam Raimi's films
  • Ghostbusters
  • the "Hakuna Matata" scene from The Lion King, where Simba grows from lion cub to adult
  • Scarface's montage showing Tony Montana's rise to power, set to the song "Scarface (Push It to the Limit)"
  • Several training montages in Chariots of Fire and Cool Runnings
  • In one montage in Dave, presidential look-alike Dave Kovic (Kevin Kline) learns the job of President; in another, he makes public appearances.
  • In a montage in Legally Blonde, Elle (Reese Witherspoon) studies for the LSAT and, at the same time, the admissions committee of Harvard Law School views her admissions video essay. In another, she buckles down studying her law school subjects.
  • In Prince of Tides, Nick Nolte coaches Jason Gould in football, set to the Minuet of the Symphony No. 104 in D major, London by Haydn.
  • In Heaven Can Wait, Warren Beatty trains in football, set to the Sonata #3 of Handel.
  • The repeated courtship sequence in Groundhog Day
  • In the Director's Cut of The Abyss, the Non-terrestrial Intelligences justify their intended deluge of the human race by showing Bud a video montage of human atrocities.
  • The film Good Morning Vietnam has a montage of violence, set, ironically, to What a Wonderful World, by Louis Armstrong. A similar montage is featured in Bowling for Columbine.
  • Satirical self-referential montages in the South Park episode "Asspen" and the film Team America: World Police.
  • Requiem for a Dream uses several montage sequences during portions of the film where the characters use drugs.
  • In an episode of "Family Guy", the dog, Brian, goes through a montage training for a final exam by exercising (as a parody), with the background music saying, "Everybody needs a montage."
  • In 1985's Real Genius, a montage is used to demonstrate the lapse of time as the students work on their laser and study for their classes.

In nearly all of these examples, the montages are used to compress narrative time and show the main character learning or improving skills that will help achieve the ultimate goal.



References

  • Television Production Handbook by Herbert Zettl
  • Television Production, Thirteenth Edition by Gerald Millerson
  • Directing and Producing for Television, Third Edition: A Format Approach by Ivan Cury
  • Fundamentals of Television Production (2nd Edition) by Ralph Donald, Riley Maynard, and Thomas D. Spann
  • Montage (Cinema Aesthetics) by Sam Rohdie
  • Modernist Montage: The Obscurity of Vision in Cinema and Literature by P. Adams Sitney
  • Film Theory and Criticism: Introductory Readings by Leo Braudy and Marshall Cohen
  • Cinematic Storytelling: The 100 Most Powerful Film Conventions Every Filmmaker Must Know by Jennifer Van Sijll
  • Sight, Sound, Motion: Applied Media Aesthetics by Herbert Zettl
  • Picture Composition for Film and Television, Second Edition by Peter Ward
  • Composition: The Anatomy of Picture Making by Harry Sternberg

Saturday, 11 October 2008

Some Terms for Understanding Editing

180° rule. The rule is a basic film editing guideline that states that two characters (or other elements) in the same scene should always have the same left/right relationship to each other. If the camera passes over the imaginary axis connecting the two subjects, it is called crossing the line. The new shot, from the opposite side, is known as a reverse angle.

Buffer shot (neutral shot). A bridging shot (normally taken with a separate camera) to separate two shots which would have reversed the continuity of direction.

Continuity editing. It is the predominant style of editing in narrative cinema and television. The purpose of continuity editing is to smooth over the inherent discontinuity of the editing process and to establish a logical coherence between shots. In most films, logical coherence is achieved by cutting to continuity, which emphasizes smooth transition of time and space. However, some films incorporate cutting to continuity into a more complex classical cutting technique, one which also tries to show psychological continuity of shots. The radical montage technique relies on symbolic association of ideas between shots rather than association of simple physical action for its continuity. What became known as the popular 'classical Hollywood' style of editing was developed by early European and American directors, in particular D.W. Griffith in films such as The Birth of a Nation and Intolerance. The classical style ensures temporal and spatial continuity as a way of advancing narrative, using such techniques as the 180 degree rule, the establishing shot, and shot reverse shot.

Cross-cut. A cut from one line of action to another. Also applied as an adjective to sequences which use such cuts.

Cut. Sudden change of shot from one viewpoint or location to another. On television, cuts occur on average about every 7 or 8 seconds. Cutting may:
  • change the scene;
  • compress time;
  • vary the point of view; or
  • build up an image or idea.
There is always a reason for a cut, and you should ask yourself what the reason is. Less abrupt transitions are achieved with the fade, dissolve, and wipe.

Cutaway/cutaway shot (CA). A bridging, intercut shot between two shots of the same subject. It represents a secondary activity occurring at the same time as the main action. It may be preceded by a definite look or glance out of frame by a participant, or it may show something of which those in the preceding shot are unaware. (See narrative style: parallel development) It may be used to avoid the technical ugliness of a 'jump cut' where there would be uncomfortable jumps in time, place or viewpoint. It is often used to shortcut the passing of time.

Cutting rate. Frequent cuts may be used as deliberate interruptions to shock, surprise or emphasize.

Cutting rhythm. A cutting rhythm may be progressively shortened to increase tension. Cutting rhythm may create an exciting, lyrical or staccato effect in the viewer.

Editing. Shaping language, images, or sound through correction, condensation, organization, and other modifications in various media. A person who edits is called an editor. In a sense, the editing process originates with the idea for the work itself and continues in the relationship between the author and the editor. Editing is, therefore, also a practice that includes creative skills, human relations, and a precise set of methods.

Establishing shot. In film and television, an establishing shot sets up, or "establishes", a scene's setting and/or its participants. Typically it is a shot at the beginning (or, occasionally, end) of a scene indicating where, and sometimes when, the remainder of the scene takes place.

Fade, dissolve (mix). Both fades and dissolves are gradual transitions between shots. In a fade the picture gradually appears from (fades in) or disappears to (fades out) a blank screen. A slow fade-in is a quiet introduction to a scene; a slow fade-out is a peaceful ending. Time lapses are often suggested by a slow fade-out and fade-in. A dissolve (or mix) involves fading out one picture while fading up another on top of it. The impression is of an image merging into and then becoming another. A slow mix usually suggests differences in time and place. Defocus or ripple dissolves are sometimes used to indicate flashbacks in time.

Film/video editing. It is an art of storytelling practiced by connecting two or more shots together to form a sequence, and the subsequent connecting of sequences to form an entire movie. Film editing is the only art that is unique to cinema and which separates filmmaking from all other art forms that preceded it (such as photography, theatre, dance, writing, and directing). However there are close parallels to the editing process in other art forms such as poetry or novel writing. It is often referred to as the "invisible art," since when it is well-practiced, the viewer becomes so engaged that he or she is not even aware of the work of the editor.

Inset. An inset is a special visual effect whereby a reduced shot is superimposed on the main shot. Often used to reveal a close-up detail of the main shot.

Insert/insert shot. A bridging close-up shot inserted into the larger context, offering an essential detail of the scene (or a re-shooting of the action with a different shot size or angle.)

Intercutting. Here the editor cuts back and forth from one subject or event to another. With this technique, the events appear to be happening at the same time. In parallel editing or parallel cutting, sometimes also called cross-cutting, the sequences or scenes are intercut so as to suggest that they are taking place at the same time. Parallel cutting might show shots of a villain being villainous intercut with shots of the hero or heroine coming to the rescue. Most chases use parallel editing, switching back and forth between pursuer and pursued. Phone conversations, too, are often parallel edited.

Invisible editing. See narrative style and continuity editing.

Jump cut. Abrupt switch from one scene to another which may be used deliberately to make a dramatic point. Sometimes boldly used to begin or end action. Alternatively, it may be the result of poor pictorial continuity, perhaps from deleting a section.

Matched cut. In a 'matched cut' a familiar relationship between the shots may make the change seem smooth:

  • continuity of direction;
  • completed action;*
  • a similar centre of attention in the frame;
  • a one-step change of shot size (e.g. long to medium);
  • a change of angle (conventionally at least 30 degrees).

*The cut is usually made on an action (for example, a person begins to turn towards a door in one shot; the next shot, taken from the doorway, catches him completing the turn). Because the viewer's eye is absorbed by the action, he is unlikely to notice the movement of the cut itself.

Motivated cut. Cut made just at the point where what has occurred makes the viewer immediately want to see something which is not currently visible (causing us, for instance, to accept compression of time). A typical feature is the shot/reverse shot technique (cuts coinciding with changes of speaker). Editing and camera work appear to be determined by the action. It is intimately associated with the 'privileged point of view' (see narrative style: objectivity).

Narrative mode. (also called narrative voice, narrative point of view, or mode of narration) It is any method through which the author(s) of a literary, theatrical, cinematic, or musical piece conveys his/her/their story to the audience. It refers to the perspective through which the story is viewed and how it is expressed to the audience. Whoever this person is, he or she is regarded as the "narrator," a character developed by the author for the specific purpose of conveying the story. The narrative point of view is meant to be the related experience of this narrator character, not that of the actual author (although, in some cases, especially in non-fiction, it is possible for the narrator and author to be the same person). Beyond determining through whom the story is told or seen, the narrative mode may also shape how the story is described or expressed, for example through stream of consciousness or unreliable narration.

Narrative structure. It is generally described as the structural framework that underlies the order and manner in which a narrative is presented to a reader, listener, or viewer.

Narrative style. To understand narrative style, it is first important to understand narrative itself. A narrative or story is a construct created in a suitable format (written, spoken, poetry, prose, images, song, theatre, or dance) that describes a sequence of fictional or non-fictional events. The word "story" may be used as a synonym of "narrative", but can also be used to refer to the sequence of events described in a narrative. A narrative can also be told by a character within a larger narrative.

Parallel editing. Editing that alternates shots of two or more lines of action occurring in different places, usually simultaneously. The two actions are therefore linked, associating the characters from both lines of action.

Reaction shot. Any shot, usually a cutaway, in which a participant reacts to action which has just occurred.

Reverse cut/crossing the line. Crossing the line is a very important concept in video and film production. It refers to an imaginary line which cuts through the middle of the scene, from side to side with respect to the camera. Crossing the line changes the viewer's perspective in such a way that it causes disorientation and confusion. For this reason, crossing the line is something to be avoided.

Shot reverse shot. A shot/counter-shot in a film technique wherein one character is shown looking (often off-screen) at another character, and then the other character is shown looking "back" at the first character. Since the characters are shown facing in opposite directions, the viewer unconsciously assumes that they are looking at each other. Shot reverse shot is a feature of the "classical" Hollywood style of continuity editing, which deemphasizes transitions between shots such that the audience perceives one continuous action that develops linearly, chronologically, and logically.

Split screen. The division of the screen into parts which can show the viewer several images at the same time (sometimes the same action from slightly different perspectives, sometimes similar actions at different times). This can convey the excitement and frenzy of certain activities, but it can also overload the viewer.

Stock shot. Footage already available and used for another purpose than the one for which it was originally filmed.

Superimpositions. Two or more images placed directly over each other (e.g. an eye and a camera lens, to create a visual metaphor).

Wipe. An optical effect marking a transition between two shots. It appears to supplant an image by wiping it off the screen (as a line or in some complex pattern, such as by appearing to turn a page). The wipe is a technique which draws attention to itself and acts as a clear marker of change.

Thursday, 9 October 2008

Introduction to Video Recording



Q1 Define Video Recording and identify some of its advantages.

Ans) Video Recording is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion.

Its advantages:
The quality of videotaped programs is virtually indistinguishable from the original picture and sound, with excellent broadcast quality. The tapes can be reused and duplicated without loss of picture or sound quality. Video tapes can also be replayed immediately, and the recording can be analyzed by the technicians and the directors, which saves time. Another major advantage is that a video tape's picture and sound can be edited or modified separately: unwanted or faulty sections can be deleted or replaced by other material. Videotaped programs can easily be stored as an entire program, as single sections, or even as still shots that can be reused and manipulated later. The tapes are also not especially prone to damage and therefore have long lives.

Q2 What are the different types of Video Systems in use? Explain their working.

Ans) There are three types of systems used: BETAMAX, VHS, HI8

Betamax: In the Betamax system, the video tape is guided along the head drum in a U-shape for all tape guidance functions, such as recording, playback and fast forward/backward. When the cassette is inserted, the tape is guided around the head drum (called threading). Threading the tape takes a few seconds, but once the tape is threaded, shifting from one tape function to another can be achieved rapidly and smoothly.

VHS: JVC's VHS System was introduced one year after the launch of Betamax. In VHS, the tape is guided through in an M-shape; the so-called M-tape guidance system. It is considered simpler and more compact than the U-system. Threading itself is fast, but it is done every time the tape guidance function is changed, which makes the system somewhat slower and noisier than the U-system. This problem is addressed by "Quick-start" VHS video recorders, which allow fast and silent changes in tape guidance functions. To avoid excessive wear, M-tape guidance system recorders are provided with an automatic switch-off feature, activated some minutes after the recorder is put on hold, which automatically unthreads the tape. An improvement of the basic VHS system is HQ (High Quality) VHS.

The VHS system made different design choices than Betamax in areas such as track size and relative tape speed. VHS has rather wide video tracks but a slightly lower relative tape speed, and the same applies to the audio track. In general, the advantages of one aspect are offset by the disadvantages of the other. The end result is that there is not much difference between the sound and image quality of the two systems.

HI8: As a direct addition to the Video-8 camcorders, there is a third system: Video Hi8, which uses a smaller cassette than VHS and Betamax. The sound recording takes place digitally, making its sound quality very good. When using the special Hi8 Metal Tape, the quality of both image and sound are equivalent to that of Super-VHS. The Video-Hi8-recorder can also be used to make audio recordings (digital stereo) only. Using a 90 minute cassette, one can record 6 x 90 minutes, making a total of 18 hours of continuous music. The video Hi8-system also allows manipulating digital images, such as picture-in-picture and editing. Video Hi8 uses a combination of the M- and U-tape guidance system.

Q3 Describe the process of Sound Recording.

Ans) In the case of a mono video recorder, the audio signal which corresponds with the image is transferred to a separate, fixed audio head. As in an audio cassette deck, this head writes an audio track in the longitudinal direction of the tape. This is called linear or longitudinal track recording.

The video recorder has two erase heads. One is a wide erase head covering the whole tape width which automatically erases all existing image, synchronization and sound information when a new recording is made.

The other erase head is smaller and positioned at the position of the audio track. With this erase head, the soundtrack can be erased separately, without affecting the video information. In this way, separate audio can be added to a video recording. This is called audio dubbing, and can be particularly useful when making your own camera recordings. The linear audio track does have some restrictions. Due to its low tape speed, it is not suitable for hi-fi recordings. Moreover, the audio track is so narrow (0.7 mm for VHS and 1.04 mm for Betamax) that not even stereo sound can be recorded properly.

The frequency range is limited, as is the dynamic range (which is expressed in decibels), and the signal-to-noise ratio is not very high. (The signal-to-noise ratio relates the amount of noise to the total signal: the higher this ratio, the less noise and the better the signal.)
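Since decibels come up for both dynamic range and signal-to-noise ratio, here is a minimal sketch of the standard power-ratio formula (the power figures are made-up illustrations, not measurements from any recorder):

```python
import math

def snr_db(signal_power, noise_power):
    # Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
    # The higher the value, the less audible the noise floor.
    return 10 * math.log10(signal_power / noise_power)

# A signal 1000 times more powerful than the noise gives 30 dB.
print(snr_db(1000.0, 1.0))  # 30.0
```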

Hi-fi video recorders were developed for improved sound quality. In the case of hi-fi, the audio signal is also put on tape via revolving heads similar to the video signal, not on the linear track.

As there is no space between the video tracks, as the video tracks lie right next to each other with no space in between, the audio tracks need to be recorded in the same place as the video tracks. The way this is realized is by recording the audio signal under (deeper than) the video signal. In hi-fi video recorders, the audio signal is modulated to a high carrier frequency. This is realized via FM modulation, with the right channel stereo signal at a slightly higher frequency than the left channel.

The corresponding video and audio signals are written to tape immediately after each other. First the FM audio signal is registered at a deep level in the tape's magnetic coating. Straight after the audio signal, the video signal is recorded. As the frequency of the video signal is higher than that of the audio signal, it does not register as deep in the tape coating as the audio signal. The video signal erases the audio signal in the top layer and is recorded there instead. Thus, the audio and video tracks are written in the same magnetic layer, separately, one on top of the other.

A hi-fi video recorder is also suitable as a high-quality audio recorder, not only because of the professional recording quality, but also because of the long-play possibilities and the low recording costs. A hi-fi video recorder needs to be tuned very accurately. As the two rotating audio heads function alternately, the recorded sound consists of successive segments that need to fit together perfectly. If they do not, the result is rumble, a humming sound. In high-quality, well-tuned hi-fi video recorders you will not hear this sound.

Q4 What are Camcorders and what are the formats being used by them?

Ans) A camcorder is a portable electronic device for recording video images and audio onto an internal storage device. The camcorder contains both a video camera and (traditionally) a videocassette recorder in one unit. Camcorders are often classified by their storage device: VHS, Betamax, Video8 are examples of older, videotape-based camcorders which record video in analog form. Newer camcorders include Digital8, miniDV, DVD, Hard drive and solid-state (flash) semiconductor memory, which all record video in digital form.

MiniDV is now the most popular format for tape-based consumer camcorders, providing near-broadcast-quality video and sophisticated nonlinear editing capability on consumer equipment. MiniDV storage allows full-resolution video (720x576 for PAL, 720x480 for NTSC), unlike the analogue video standards before it. Digital video doesn't suffer from colour bleeding or fading. There has been a trend, largely spearheaded by Hitachi, Panasonic, and Sony, to sell consumer camcorders based on optical discs rather than tape. Most common are DVD recordable camcorders, which are popular among point-and-shoot users because a disc can be taken out of the camcorder and dropped directly into a DVD player, much like VHS-C on the analog side. However, professionals consider DVD media too inflexible for easy editing.

Q5 What are the two different image capture formats?

Ans) Digital video cameras come in two different image capture formats: interlaced and progressive scan. Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, and then the even-numbered lines are scanned, then the odd-numbered lines are scanned again, and so on. One set of odd or even lines is referred to as a "field", and a consecutive pairing of two fields of opposite parity is called a frame. A progressive scanning digital video camera records each frame as distinct, with both fields being identical.

Thus, interlaced video captures twice as many fields per second as progressive video does when both operate at the same number of frames per second. This is one of the reasons video has a “hyper-real” look: it draws a different image 60 times per second, as opposed to film, which records 24 or 25 progressive frames per second. Progressive-scan camcorders such as the Panasonic DVX100 are generally more desirable because of the similarities they share with film: both record frames progressively, which results in a crisper image, and both can shoot at 24 frames per second, which produces motion strobing (blurring of the subject when fast movement occurs). Consequently, progressive-scan video cameras tend to be more expensive than their interlaced counterparts. (Note that even though the digital video format only allows for 29.97 interlaced frames per second [or 25 for PAL], 24 frames per second progressive video is possible by displaying identical fields for each frame, and displaying 3 fields of an identical image for certain frames.)
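The field/frame relationship described above can be sketched in a few lines of Python. This is a toy illustration only: the four-line "frame" of labelled scan lines is hypothetical, not a real video format.

```python
# Toy illustration of interlacing: a frame is split into two fields of
# opposite parity (even and odd line sets), and "weaving" the fields
# back together reconstructs the full frame.

def split_fields(frame):
    """Split a frame (a list of scan lines) into its even and odd fields."""
    even = frame[0::2]  # lines 0, 2, 4, ... (one field)
    odd = frame[1::2]   # lines 1, 3, 5, ... (the field of opposite parity)
    return even, odd

def weave(even, odd):
    """Reassemble a full frame from two fields of opposite parity."""
    frame = []
    for e, o in zip(even, odd):
        frame.extend([e, o])
    return frame

frame = ["line0", "line1", "line2", "line3"]
even, odd = split_fields(frame)
assert weave(even, odd) == frame   # the two fields carry the whole frame

# Interlaced capture delivers 2 fields per frame, so 30 fps -> 60 fields/s.
fps = 30
fields_per_second = fps * 2
```

The last two lines restate the arithmetic behind the "60 images per second" figure: the same frame rate, but twice as many distinct moments captured.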

Q6 What is Video Compression and explain its different types.

Ans) Video compression refers to reducing the quantity of data used to represent video content without excessively reducing the quality of the picture. It reduces the number of bits required to store and/or transmit digital media, so compressed video can be transmitted more economically over a lower-bandwidth carrier.

Digital video requires high data rates: the better the picture, the more data is ordinarily needed. This means powerful hardware, and lots of bandwidth when video is transmitted. However, much of the data in video is not necessary for achieving good perceptual quality, because it can be easily predicted; for example, successive frames in a movie rarely change much from one to the next. This is what makes data compression work so well with video: it can make video files far smaller with little perceptible loss in quality.
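The frame-to-frame predictability mentioned above can be made concrete with a minimal sketch: store only the differences from the previous frame. The one-dimensional "frames" of pixel values here are hypothetical, purely for illustration.

```python
# Temporal redundancy sketch: successive frames change little, so
# storing frame-to-frame differences mostly yields zeros, which
# compress far better than raw pixel values.

def frame_delta(prev, curr):
    """Per-pixel difference between two frames."""
    return [c - p for p, c in zip(prev, curr)]

def apply_delta(prev, delta):
    """Reconstruct the next frame from the previous frame plus the delta."""
    return [p + d for p, d in zip(prev, delta)]

frame1 = [10, 10, 10, 200, 200, 10]
frame2 = [10, 10, 10, 201, 200, 10]   # only one pixel changed

delta = frame_delta(frame1, frame2)
assert delta == [0, 0, 0, 1, 0, 0]           # mostly zeros: highly compressible
assert apply_delta(frame1, delta) == frame2  # and perfectly reversible
```

Real codecs are far more elaborate (motion estimation, transforms, entropy coding), but they exploit exactly this property.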

Some forms of data compression are lossless: when the data is decompressed, the result is a bit-for-bit perfect match with the original. While lossless compression of video is possible, it is rarely used, because any lossless compression system will sometimes produce a file (or portions of one) that is as large, or has the same data rate, as the uncompressed original. As a result, all hardware in a lossless system would have to run fast enough to handle uncompressed video as well, which eliminates much of the benefit of compressing the data in the first place.

Lossy compression, usually applied to image data, does not allow reproduction of an exact replica of the original image, but has a higher compression ratio. Thus lossy compression allows only an approximation of the original to be generated.

The ratio of the original size (O) to the compressed size (C) is known as the compression ratio (R = O/C). For image compression, the fidelity of the approximation usually decreases as the compression ratio increases. The success of data compression depends largely on the data itself: some data types are inherently more compressible than others. Generally, some elements within the data are more common than others, and most compression algorithms exploit this property, known as redundancy. The greater the redundancy within the data, the more successful the compression of the data is likely to be. Fortunately, digital video contains a great deal of redundancy and is thus very suitable for compression.
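The link between redundancy and compression ratio is easy to demonstrate with Python's standard `zlib` module (a lossless codec, standing in here for any compressor; the byte strings are arbitrary illustrative data):

```python
import os
import zlib

def compression_ratio(data):
    """Compression ratio R = original size / compressed size."""
    return len(data) / len(zlib.compress(data))

redundant = b"video frame " * 500   # 6000 bytes of highly repetitive content
noise = os.urandom(6000)            # 6000 bytes of incompressible random data

assert compression_ratio(redundant) > 20   # redundancy compresses very well
assert compression_ratio(noise) < 1.1      # noise barely compresses at all
```

The random buffer can even come out slightly *larger* after compression (ratio just below 1), which is the lossless worst case the previous answer describes.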

Q7 List the major differences between a film and a video.

Ans) Exposure Latitude - A key difference between DV and film is exposure latitude, which affects contrast and detail. Color negative has a usable exposure range of 7 stops, with normal exposure approximately in the middle. Most stocks provide 4 stops overexposure and 3 stops underexposure where detail is still visible.

Video has a usable exposure latitude of 5 stops, providing 2 stops overexposure and 3 stops underexposure where detail is still visible. Exposure beyond the -/+ limits results in tonal compression and is reproduced as either pure white or pure black, respectively. Obviously, there is a loss of detail as well.
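Since each photographic stop represents a doubling of light, the latitude figures above translate directly into brightness ranges. A minimal sketch, using the 7-stop and 5-stop figures from the text:

```python
# Each "stop" doubles the amount of light, so a usable latitude of
# n stops spans a scene-brightness range of 2**n : 1.

def latitude_to_range(stops):
    """Contrast range (as a ratio to 1) covered by a given latitude in stops."""
    return 2 ** stops

assert latitude_to_range(7) == 128   # colour negative film: ~128:1 usable range
assert latitude_to_range(5) == 32    # video: ~32:1 usable range
```

So film's two extra stops of latitude amount to a four-fold wider usable brightness range, which is why highlights and shadows clip so much sooner on video.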

Motion Blur - Film yields a slight blur in moving objects. This is known as motion blur and it results in a distinct fluidity of movement-- a prime contributor to the "film look." Motion blur is caused by film's relatively low frame rate of 24 frames per second. A telltale sign of video is its extreme sharpness and lack of motion blur. There are two interlaced fields for every frame of video, so the effective rate is actually 60 images per second (= 30 fps x 2 fields). This virtually eliminates motion blur, creating an image that is a bit too sharp and devoid of fluidity (the dreaded "video look").

The answer to this is a technical breakthrough called progressive scanning, where each frame is scanned once. In other words, the frame is scanned as a single field, with no interlacing. The lower image rate reproduces motion blur comparable to film. Another benefit of progressive scanning is a dramatic increase in resolution. This occurs because progressive scanning eliminates interlace artifacts (combed edges in movement) and interline flicker (noise in fine patterns).

Resolution - The final difference between video and film is resolution. Many filmmakers erroneously assume that film is far superior across the board. Arguably, the disparity in resolution has less of an impact on the look of DV than exposure latitude and motion blur. It is not noticeable to the average audience, except when aliasing rears its ugly head. Aliasing can be minimized by avoiding fine patterns, particularly checkered and striped clothing.

DV has an interesting advantage over film that may, in part, make up for its lower resolution. It can "see" in low light almost like the human eye and captures beautiful images during sunrise and sunset.

Q8 What is the major difference between Analog and Digital Video Recording?

Ans) The analog recording method stores signals as a continuous wave in or on the media, rather than as the discrete numbers used in digital recording. The wave is stored as a physical texture on a phonograph record, or as a fluctuation in the field strength of a magnetic recording. In an analog system the continuously varying voltage magnetizes tape particles in a continuously varying pattern that mirrors the signal. On playback, the tape particles create a continuously varying output signal that continues to mirror the original.

Every transfer of the picture information is an imitation--or, more precisely, an imitation of an imitation--with consequences that we'll see shortly. In a digital system, by contrast, the first thing that happens to the original continuous signal is that it is fed through an analog-to-digital converter chip. That chip samples the signal hundreds of thousands of separate times per second and assigns each discrete sample a numerical value that corresponds to the strength of the signal at that precise moment in time. These numbers, rather than the signal itself, are copied and recopied throughout the rest of the process.
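The sample-and-assign-a-number step can be sketched in a few lines of Python. The sample rate and bit depth below are illustrative toy values, not those of any real converter chip:

```python
import math

# Sketch of what an analog-to-digital converter does: read a continuous
# signal at discrete instants and assign each sample an integer level.

SAMPLE_RATE = 8     # samples per signal cycle (real ADCs: hundreds of thousands/s)
LEVELS = 256        # 8-bit quantization: 256 discrete values

def digitize(signal, n_samples):
    """Sample a signal (a function of time, range -1..1) into integer levels."""
    samples = []
    for i in range(n_samples):
        v = signal(i / SAMPLE_RATE)                # read the signal at instant i
        level = round((v + 1) / 2 * (LEVELS - 1))  # map [-1, 1] onto 0..255
        samples.append(level)
    return samples

# One cycle of a sine wave becomes eight integers:
samples = digitize(lambda t: math.sin(2 * math.pi * t), SAMPLE_RATE)
assert len(samples) == 8
assert all(0 <= s < LEVELS for s in samples)
```

It is this list of integers, not the wave itself, that gets copied and recopied downstream.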

Q9 What are the advantages of Analog over digital and vice versa?

Ans) Analog over Digital

Scalability

All video, analog and digital, tends to look sharper and clearer on a smaller screen; it's the natural result of squeezing the same amount of visual information into a smaller space. All but the highest quality digital video, however, suffers greatly from enlargement. When you blow up your digitized image onto a huge home-theater TV screen, for example, all of those invisible digital compression artifacts become quite noticeable--straight lines become jaggy, curves look blocky, etc. Analog video, on the other hand, is much better at filling larger screens with sharp-looking images.

Seamlessness

In the audio world, some purists have returned to analog (vinyl LP) recordings because they can hear the fact that digital recordings only sample the signal at intervals instead of copying the whole thing. To them, CDs sound hollow and brittle in consequence.

Digital over Analog

Instead of copying the video signal, digital duplication transcribes the numerical code that describes that signal. If you transcribe it accurately (and computers are outstanding at chores like that), you can decode the result into a daughter signal that is essentially indistinguishable from the parent.

Freedom from Noise

Noise is any disturbance in an electrical current that is not part of the signal, and every current carries a certain amount of this electrical disturbance. Since an analog dupe is an imitation, it copies the noise right along with the parent signal, while adding new noise in the process. That means that in each generation, the noise level relative to the signal (signal-to-noise ratio) increases and the quality decreases proportionately. In digital recording, noise is not a problem because the signal consists entirely of current pulses carrying information like Morse code: power on = 1; power off = 0. If the voltage level of the "power on" part of the signal is well above the noise level, then the transcribing (copying) system can be set to respond only to current at that level and ignore the noise entirely. So even if the process adds a small amount of its own noise, it never copies the parental noise--nor does it pass on its own noise to the grandchildren. The result is that digital video can be copied through many generations without appreciable quality loss. This is a massive improvement over analog video (and even over cinematic film, which is another analog medium).
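The contrast between analog and digital generation loss can be simulated in a short sketch. The noise figures here are arbitrary illustrative values, not measurements of any real recorder:

```python
import random

# Sketch of generation loss: each analog copy inherits the parent's
# noise and adds fresh noise of its own, while a digital copy simply
# re-thresholds the pulses and never inherits noise at all.

NOISE_PER_COPY = 0.05  # illustrative noise energy added by each copying pass

def analog_copy(initial_noise, generations):
    """Analog dubbing: inherited noise plus new noise, every generation."""
    noise = initial_noise
    for _ in range(generations):
        noise += NOISE_PER_COPY
    return noise

def digital_copy(bits, generations):
    """Digital dubbing: small noise is rejected by thresholding each bit."""
    for _ in range(generations):
        # Anything above 0.5 reads as 1, anything below as 0, so the
        # copy is an exact transcription despite the added disturbance.
        bits = [1 if b + random.uniform(-0.3, 0.3) > 0.5 else 0 for b in bits]
    return bits

original = [1, 0, 1, 1, 0]
assert digital_copy(original, 10) == original            # no generation loss
assert abs(analog_copy(0.0, 10) - 0.5) < 1e-9            # noise grows each pass
```

After ten generations the analog chain has accumulated substantial noise, while the digital chain is still bit-for-bit identical to the original.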

Computer Compatibility

By far the biggest advantage of digital video is that a computer can process and store it. Computers are astonishingly powerful but they cannot work with pictures, or more accurately, with the continuously varying wave forms that record them. Before you can get your computer to handle or even recognize video input, you have to digitize the video. For many years, professionals have digitized video, not only to take advantage of loss-free duplicating, but also to perform image processing. Image processing means superimposing titles, compositing multiple images, and adding effects like dissolves and wipes. In image processing, digital is an ephemeral state: an analog signal is digitized, massaged for a few microseconds at most, and immediately reconverted to analog.

But as hard drives got bigger and faster, and as image compression techniques improved, it became possible to digitize the signal and then keep it in that form indefinitely by storing it in the computer.

Q10 Describe the process of Helical Scanning.

Ans) Helical scan or striping is a method of recording higher bandwidth signals onto magnetic tape than would otherwise be possible at the same tape speed with fixed heads. It is used in video cassette recorders, digital audio tape recorders, and numerous computer secondary storage and backup systems. In a fixed head system, tape is drawn past the head at a linear speed. The head creates a fluctuating magnetic field in response to the signal to be recorded, and the magnetic particles on the tape are forced to line up with the field at the head.

As the tape moves away, the magnetic particles carry an imprint of the signal in their magnetic orientation. If the tape moves too slowly, a high frequency signal will not be imprinted — the particles' polarity will simply oscillate in the vicinity of the head, to be left in a random position. Thus the bandwidth capacity of the recorded signal can be seen to be related to tape speed — the faster the speed, the higher the frequency that can be recorded.

Video and digital audio need considerably more bandwidth than analog audio, so much so that tape would have to be drawn past the heads at very high speed in order to capture this signal. Clearly this is impractical, since tapes of immense length would be required. (However, see VERA for details of a partially-successful linear videotape system.) The generally adopted solution is to rotate the head against the tape at high speed, so that the relative velocity is high, but the tape itself moves at a slow speed. To accomplish this, the head must be tilted so that at each rotation of the head, a new area of tape is brought into play; each segment of the signal is recorded as a diagonal stripe across the tape. This is known as a helical scan because the tape wraps around the circular drum at an angle, traveling up like a helix.
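The point of the spinning drum is that the head-to-tape "writing" speed is set by the drum, not by the slow linear tape speed. A back-of-the-envelope sketch, using approximate VHS (NTSC) figures purely for illustration:

```python
import math

# Helical scan arithmetic: the head sweeps the drum circumference many
# times per second, so its speed relative to the tape dwarfs the linear
# tape speed. Values below are approximate VHS (NTSC) figures.

DRUM_DIAMETER_M = 0.062     # ~62 mm head drum
DRUM_RPS = 30               # ~1800 rpm (roughly one revolution per frame)
TAPE_SPEED_M_S = 0.03335    # ~33.35 mm/s linear tape speed (SP mode)

# Speed of the head along its circular path (ignoring the small
# contribution from the tape's own motion):
writing_speed = math.pi * DRUM_DIAMETER_M * DRUM_RPS

assert writing_speed > 5.0                   # several metres per second
assert writing_speed / TAPE_SPEED_M_S > 100  # far faster than the tape moves
```

A fixed-head recorder would need the tape itself to run at those several metres per second to capture the same bandwidth, which is exactly the "tapes of immense length" problem the answer describes.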

Wednesday, 8 October 2008

Introduction to basic 'Structuralism' in Psychology

Question: What is structuralism?

Answer: Structuralism is a general approach in various academic disciplines that seeks to explore the inter-relationships between fundamental elements, upon which higher mental, linguistic, social, cultural, etc., "structures" are built, and through which meaning is then produced within a particular person, system, or culture.

Structuralism appeared in academic psychology for the first time in the 19th century and then reappeared in the second half of the 20th century, when it grew to become one of the most popular approaches in the academic fields concerned with analyzing language, culture, and society. Ferdinand de Saussure is generally considered the starting point of 20th-century structuralism. As with any cultural movement, the influences and developments are complex.

Structuralism in psychology (19th century)

At the turn of the 19th century, the founding father of experimental psychology, Wilhelm Wundt, tried to confirm experimentally his hypothesis that conscious mental life can be broken down into fundamental elements which then form more complex mental structures. Wundt's structuralism was quickly abandoned because its elements could not be tested in the same way as behavior. That has changed now that brain-scanning technology can identify, for example, specialized brain cells that respond exclusively to basic lines and shapes, whose outputs are then combined in subsequent brain areas where more complex visual structures are formed. This line of research in modern psychology is called cognitive psychology rather than structuralism, because Wundt's term never ceased to be associated with the problem of observability.