
Thread: CCTV Basics - For the Youngsters

  1. #1
    Senior Member
    intelliGEORGE's Avatar
    Join Date
    Jan 2008
    Location
    Sydney, AUSTRALIA
    Age
    43
    Posts
    4,106
    Thanks
    884
    Thanked 1,484 Times in 691 Posts
    Rep Power
    479
    Reputation
    7236

    Default CCTV Basics - For the Youngsters

    1. Use solid-core coaxial cable only, not stranded cable. The cable must have a solid copper core with a copper shield.

    2. Avoid running video cable near high-voltage cable. A good rule to follow is: for every 100 volts, allow 1 ft of separation between the video cable and the power cable.

    3. While cabling, avoid areas such as electrical equipment or transmitter rooms, where electromagnetic interference (EMI) is expected. EMI can create all types of interference in the video picture, and coaxial cable is very susceptible to it.

    4. Minimize cable breaks - Every extra connection in the cable can deteriorate the quality of the video signal. If unavoidable, make sure the insulation is good; otherwise over time the exposed cable can touch the ground causing ground loop currents. It may be difficult or expensive to fix such problems in the future.

    5. Avoid sharp bends, which change the cable impedance and cause signal reflections and picture distortion. This is especially true when routing all the cable into the CCTV monitor rack.

    6. Poor BNC connections are the major cause of poor picture quality. BNC connectors should also be replaced every couple of years as part of the system maintenance program.

    7. Use metal conduits for high security applications.

    8. Use heavy-duty cable for outdoor applications providing better protection against the elements.


    IP Addressing

    Every device connected to the network that uses the TCP/IP protocol has a unique IP address.

    IP Address = Internet Protocol Address.

    In the current version, IPv4, the IP address is made up of four sets of numbers separated by dots. Example: 131.103.243.192. Each number set is one byte (8 bits) long; in other words, the IP address is 4 bytes or 32 bits long.
    Since each number set is 8 bits long, it covers a number range of 0 to 255.

    Therefore the max number of an IP address is 255.255.255.255

    Parts of an IP address

    The IP address has two parts. One part is the network address, while the second part gives the device address within the network. The IP address can be compared to the mailing address

    Network address = Zip Code
    Device address = Street or PO Box address.

    The identification of the network and device address within the IP address depends upon the classification of the network.

    Class A:
    The first number set specifies the network address, while the remaining three number sets specify the device. Address Range: 001.xxx.xxx.xxx to 126.xxx.xxx.xxx (127.xxx.xxx.xxx is reserved for loopback)

    Example: 81.234.101.56
    All the numbers in this class are already assigned. Government or large commercial organizations have been assigned this range.

    Class B:
    The first two number sets indicate the network address, while the remaining two indicate the device. Address Range: 128.001.xxx.xxx to 191.254.xxx.xxx

    Example: 144.56.234.101
    This class is assigned to universities, commercial organizations and Internet Service Providers (ISP).

    Class C:
    In this case the first 3 number sets specify the network address, while the remaining number set indicates the device address. Address Range: 192.000.001.xxx to 223.255.254.xxx

    Example: 203.7.8.201
    The maximum number of devices that can be attached to a single network address is 254; it is therefore suitable for smaller networks.
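    As a quick sketch of the classful scheme above, the class of an address can be read straight off its first octet (assuming the conventional ranges, with 127 treated as the reserved loopback range):

```python
def ip_class(address):
    """Return the classful network class of an IPv4 address,
    based on the value of its first octet."""
    first = int(address.split(".")[0])
    if first == 127:
        return "loopback"   # 127.x.x.x is reserved
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "D/E"            # multicast and reserved ranges

print(ip_class("81.234.101.56"))   # A
print(ip_class("144.56.234.101"))  # B
print(ip_class("203.7.8.201"))     # C
```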

    Shortage of IP address


    The number of networks and devices has exploded in the recent past, which means the pool of available IP addresses is getting exhausted. Some options:

    Temporary IP Address:

    One solution to the IP address shortage is to give a device a temporary address when it connects to the Internet. After the device disconnects, the same address can be given to another device; this is how ISPs operate.

    Reduce Need for IP Address:

    The router, which is the gateway of the network, has a fixed public IP address. All the devices connected to the network share this IP address when communicating with the Internet. Internally, the router keeps a list of the addresses of the devices' network cards (NICs) and uses these addresses to communicate within the network.

    IP Version 6


    To overcome the IP address shortage, a new version, IPv6, has been implemented. An IPv6 address is 128 bits long and is written as eight groups of hexadecimal digits separated by colons. With an address space this large, the classful A/B/C scheme is no longer needed.
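    Python's standard ipaddress module can confirm the two address sizes directly:

```python
import ipaddress

# An IPv4 address is 32 bits; an IPv6 address is 128 bits.
v4 = ipaddress.ip_address("131.103.243.192")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128
```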

    Resolution

    Resolution is a key specification of any CCTV equipment. It is the quality of definition and clarity of a picture. It is defined in number of lines for an analog signal and number of pixels for a digital signal.

    More lines or pixels = higher resolution = better picture quality.

    Camera resolution depends upon the number of pixels in the CCD chip. If a camera manufacturer can put in more pixels in the same size CCD chip, that camera will have a better resolution. In other words the resolution is directly proportional to the number of pixels in the CCD chip. Any CCTV device has two types of resolution, vertical and horizontal:

    Vertical Resolution

    Vertical resolution = number of horizontal lines or pixels. The vertical resolution cannot be greater than the number of TV scanning lines, which is 625 for PAL and 525 for NTSC. Because some lines are lost in the interlacing of fields, the maximum vertical resolution possible, per the Kell factor, is 0.75 of the number of scanning lines. Using this, the maximum vertical resolution possible is:
    For PAL: 625 x 0.75 ≈ 469 lines
    For NTSC: 525 x 0.75 ≈ 394 lines
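    The Kell-factor figures are simple enough to check directly (rounded to whole lines, so the result may differ by a line from figures quoted elsewhere):

```python
# Maximum vertical resolution = scanning lines x Kell factor
KELL_FACTOR = 0.75

for standard, scan_lines in (("PAL", 625), ("NTSC", 525)):
    max_vertical = round(scan_lines * KELL_FACTOR)
    print(f"{standard}: {max_vertical} lines")
```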

    Vertical resolution is not a critical issue as most camera manufacturers achieve this figure.

    Horizontal Resolution

    Horizontal resolution = number of vertical lines. Theoretically, horizontal resolution can be increased indefinitely, but the following two factors limit this:

    • It may not be technologically possible to increase the number of pixels in a chip.
    • As the number of pixels in the chip increases, the pixel size becomes smaller, which lowers the sensitivity. There is a trade-off between resolution and sensitivity.

    If only one resolution is shown in the data sheet, it is usually the horizontal resolution.

    Measuring Resolution

    There are different methods to measure resolution:

    1. Resolution Chart

    The camera is focused on a resolution chart and the vertical and horizontal lines are measured on the monitor. The resolution measurement is the point where the lines start to merge and can no longer be separated.

    Problems

    • The merging point can be subjective as different people perceive it differently
    • The resolution of the monitor must be higher than that of the camera. This is not a problem with black-and-white monitors, but it is with many color monitors, which usually have a lower resolution than a color camera.

    2. Bandwidth Method

    This is a scientific method of measuring resolution. The bandwidth of the video signal from the CCTV equipment is measured on an oscilloscope. Multiplying this bandwidth (in MHz) by 80 gives the resolution of the camera.

    Example: If the bandwidth is 5 MHz, the camera resolution will be 5 * 80 = 400 lines
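    The rule of thumb above reduces to a one-liner:

```python
def resolution_from_bandwidth(bandwidth_mhz):
    """Horizontal resolution (TV lines) from video bandwidth,
    using the rule of thumb: lines = MHz x 80."""
    return bandwidth_mhz * 80

print(resolution_from_bandwidth(5))  # 400 lines
```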

    Human Eye and CCTV Technology

    The CCTV and video technology has been designed to meet the characteristics of the human eye. Starting with the camera, the human eye is the final recipient of the video signal. This information will explain how some of the properties of the human eye have made an impact on CCTV or video technology.

    Eye and Persistence of Image

    The human eye and a camera are quite similar. Both have a lens, an iris, and a light sensitive imaging area. In a camera it is the CCD chip, while in the eye it is the retina.

    It is important to understand the persistence of the image in the human eye. Any image formed by the eye is retained on the retina for only about 40 ms (0.04 sec), after which it disappears.

    This is known as the persistence of vision. For continuity, the next frame or image must be formed within 40 ms; if not, the viewer sees discrete frames with no continuity.

    Converting this to frames per sec, it means the human eye requires a minimum of 24 frames per sec for a picture to look continuous. This basic concept was used when PAL and NTSC TV transmission standards were set up.

    NTSC has 30 frames per sec, and is used in USA and Japan.
    PAL has 25 frames per sec, and is popular in Europe and Asia.

    On the surface, both these standards meet the minimum requirement, but they have an underlying problem. In both PAL and NTSC, a certain time elapses between the end of one frame and the start of the next. During this time a blanking pulse is added.

    Since the PAL and NTSC systems are only just above the minimum requirement, the human eye can perceive the blanking pulse between frames, and this is seen as screen flicker. To overcome this, each frame is divided into two fields – odd and even. This way the blanking pulse occurs 50 times per sec (PAL) or 60 times per sec (NTSC). At this frequency the eye cannot perceive the blanking pulse, and screen flicker is avoided.

    This is not an issue with computer monitors, because their refresh rates are much higher (typically 75–100 Hz) and they do not use the PAL or NTSC standards.

    A point of interest - have you seen the moving lines on a computer monitor while watching television? This is because of the different refresh rates of a computer and TV.

    We discussed the concept of persistency of the human eye and why we require at least 25 frames per sec for the moving images to look continuous. Later, we will deal with the sensitivity of the human eye, which in many ways determines the bandwidth of the digital signal and also the video compression techniques used.


    Basic Colors

    It is known that the three basic colors of light are Red, Green and Blue (RGB). These colors are mixed and matched to form all the different colors.
    An analysis of the spectral response of the human eye reveals that it is most sensitive to green light, while the response to red and blue is limited. Based on this finding, the brightness of a picture (Y) can be defined by the following equation:

    Y = 0.3R (Red) + 0.59G (Green) + 0.11B (Blue)

    A composite video signal contains the brightness Y and the basic colors RGB (in the color burst). When converting this analog signal into a digital signal, sampling the green signal is not necessary: only the brightness, blue and red are part of the digital signal. This is also called the YUV signal (brightness plus two color components).

    Green is reconstructed by using the above equation
    G = (Y - 0.3R - 0.11B) / 0.59

    This helps reduce the size or bandwidth of the digital signal as only three components are used, instead of four.
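    A quick round trip – computing Y from RGB and then reconstructing G – shows the two equations above are consistent (the sample values 100/150/200 are arbitrary):

```python
def luminance(r, g, b):
    """Brightness Y from the RGB weights given above."""
    return 0.3 * r + 0.59 * g + 0.11 * b

def reconstruct_green(y, r, b):
    """Recover G from Y, R and B by rearranging the Y equation."""
    return (y - 0.3 * r - 0.11 * b) / 0.59

y = luminance(100, 150, 200)
g = reconstruct_green(y, 100, 200)
print(round(g, 6))  # 150.0 - the original green value comes back
```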

    Sampling Colors

    The human eye has about 120 million rods and 8 million cones; these are like the pixels in a CCD chip. A typical CCD chip has only about 350,000 pixels, meaning a much lower picture quality than the human eye. Rods are sensitive to the brightness of an image, while cones handle color. Since the number of available cones is limited, the eye's sensitivity to color in a moving picture is not very high. Because of this, it is possible to reduce the image bandwidth by sampling the colors at a lower rate than Y.

    4:4:4 sampling

    Here each pixel in the chip is sampled for brightness (Y), primary color 1 (U) and primary color 2 (V). For a digital signal with 640 x 480 pixels (307 KB), the bandwidth would be:

    307 KB (Y) + 307 KB (U) + 307 KB (V) = 921 KB

    4:2:2 sampling

    Here each pixel is sampled for Y (640 x 480), but only every alternate horizontal pixel (320 x 480) is sampled for the color components. The bandwidth in this case will be:

    307 KB (Y) + 154 KB (U) + 154 KB (V) = 615 KB

    This color sampling process is used in JPEG and MPEG compression

    4:2:0 sampling

    Here each pixel is sampled for Y (640 x 480), but only every alternate horizontal and vertical pixel (320 x 240) is sampled for color. The bandwidth in this case will be:

    307 KB (Y) + 77 KB (U) + 77 KB (V) = 461 KB
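    The three bandwidth figures can be reproduced with a small helper (assuming one byte per sample and 1 KB = 1000 bytes, as the figures imply):

```python
def frame_size_kb(width, height, scheme):
    """Approximate uncompressed frame size in KB for the
    chroma-subsampling schemes described above."""
    chroma_fraction = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}[scheme]
    y_kb = round(width * height / 1000)
    chroma_kb = round(width * height * chroma_fraction / 1000)
    return y_kb + 2 * chroma_kb

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, frame_size_kb(640, 480, scheme), "KB")
```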

    To further reduce the image size, different compression techniques like JPEG, MPEG and Wavelet are used.

    Lens Construction and Chromatic Aberration

    To understand the construction of the lens, it is important to understand the behaviour of light. The speed of light in a vacuum is roughly 299,792 km per second. When light passes at an angle from air into a denser medium, such as glass or water, its speed slows down by the index of refraction of the medium. The following table gives a comparison for various mediums.

    Medium / Index of Refraction / Speed of Light

    Air / vacuum: index 1.0, speed 299,792 km/sec
    Water: index 1.33, speed 225,408 km/sec
    Glass: index 1.5, speed 199,861 km/sec
    Diamond: index 2.42, speed 123,881 km/sec
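    The table follows directly from dividing the vacuum speed of light (about 299,792 km/s) by each index of refraction:

```python
C_VACUUM_KM_S = 299_792  # speed of light in vacuum, km/s

for medium, index in (("air/vacuum", 1.0), ("water", 1.33),
                      ("glass", 1.5), ("diamond", 2.42)):
    speed = C_VACUUM_KM_S / index
    print(f"{medium}: n = {index}, speed = {round(speed):,} km/sec")
```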

    Because the wavefront remains continuous, this slowing down bends the light beam when it enters the new medium – similar to a bicycle changing direction when it rides from road onto sand. This basic principle is used in the construction of a lens. Convex and concave lenses are the basic lens types, making the light beam converge and diverge respectively. These basic types are mixed and matched to give a wide variety of lenses.

    Chromatic Aberration of Light

    When light is refracted through glass, a lens error called chromatic aberration occurs. What is chromatic aberration? Visible light is made of different colors and each color has a different frequency. These colors will bend differently compared to each other when they pass through a single convex lens, resulting in a scattered focal point, meaning the picture will not be focused properly.

    To overcome this error, several different lenses are grouped together. This can make the lens construction complex and therefore more expensive. There are lenses available that do not resolve the chromatic error accurately and are not compatible for use with color cameras, as they will not give a sharp focus for all the colors in the picture. The same reasoning and logic is applicable for the infrared frequency range also. For this reason, in many cases, when an infrared illuminator is used with a monochrome camera the picture is not properly focused.

    Lens Construction and Quality

    Different Glass Groups in a lens

    Many people are under the impression that a lens is made up of a single piece of glass. This is not true. Besides the glass elements required for correcting chromatic aberration, additional elements are also required:

    • To focus the lens on objects at different distances

    When the lens focus moves from one object to another at a different distance, or when it follows a moving object, the lens elements reposition themselves, i.e. the focal point changes, so the picture always remains clear. This is not a problem for the human eye, which simply varies the thickness of its lens. A long way to go to catch up with this advanced technology!

    • To achieve different focal lengths in a zoom lens

    The glass pieces move in relation to each other to achieve different magnification of the object, resulting in different focal lengths in a zoom lens.

    Factors affecting lens quality

    During construction, the following factors will determine the quality of the lens.

    1. Number of glass pieces used

    More glass pieces combined in a lens may help reduce chromatic error, improve focusing etc., but will increase light absorption, resulting in less light reaching the camera. There is a trade-off between accuracy and absorption.

    2. Absorption factor of the glass

    Poor quality glass absorbs more light, again resulting in less light reaching the camera. Obviously, glass with a lower absorption factor costs more.

    3. Coating and polishing:

    The quality of coating and polishing of the glass can improve lens quality.

    4. Mechanism:

    Precision and reliability of the mechanism that moves the glass pieces within the lens is important. Poor quality mechanisms can lead to inaccurate settings that may not be consistent.

    Different Elements of a Zoom Lens

    A zoom lens is a lens that can be changed in focal length continuously without losing focus. Magnification of a scene can be changed with a single lens, but every time the position shifts, the lens must be refocused. If two lenses are combined, it is possible to change the magnification without disturbing the focus. A zoom lens is made of the following groups

    1. Focusing lens group:

    The focusing lens group brings an object into focus. It moves irrespective of the zoom ratio or current focal length.

    2. Variator lens group:

    The variator lens group changes the size or magnification of the image

    3. Compensator lens group:

    When moved in relation to the variator group, the compensator lens group corrects the shift in focus.

    Lens groups 1 to 3 are the core of the zoom lens, and are called the zoom unit.

    4. Relay lens:

    Since the zoom unit does not converge light, the relay lens group is placed behind it to focus the object on to the CCD chip.

    Zoom lens design requires extensive optical path tracing and a continuous, self-correcting performance evaluation effort. It also involves the use of powerful computers and specialist software.

    Camera Sensitivity / Minimum Scene Illumination

    Sensitivity, measured in lux, indicates the minimum light level required to get an acceptable video picture. There is a great deal of confusion in the CCTV industry over this specification, as there are two definitions: "sensitivity at faceplate" and "minimum scene illumination".

    • Sensitivity at faceplate indicates the minimum light required at the CCD chip to get an acceptable video picture. This looks good on paper, but in reality does not give any indication of the light required at the scene.

    • Minimum scene illumination indicates the minimum light required at the scene to get an acceptable video picture. Though this is the correct way to show the specification, it depends upon a number of variables, and the variables used in the data sheet are usually not the same as in the field, so the figure does not give a correct indication of the actual light required. For example, take a camera whose data sheet indicates a minimum scene illumination of 0.1 lux. Moonlight provides this light level, yet when this camera is installed under moonlight the picture quality is poor or there is no picture at all. Why? Because the field variables are not the same as those used in the data sheet.


    How does it work?

    Light falls on the subject; a certain percentage is absorbed and the balance is reflected toward the lens of the camera. Depending upon the iris opening, a certain portion of this light falls on the CCD chip, where it generates a charge that is converted into a voltage. The following variables should be shown in the data sheet when the minimum scene illumination is specified:

    • Reflectance
    • F Stop
    • Usable Video
    • AGC
    • Shutter speed

    Reflectance

    Light from a light source falls on the subject. Depending upon the surface reflectivity, a certain portion of this light is reflected back which moves towards the camera. Below are a few examples of surface reflectivity.

    • snow = 90%
    • grass = 40%
    • brick = 25%
    • black = 5%

    Most camera manufacturers use an 89% or 75% (white surface) reflectance to define the minimum scene illumination. If the actual scene you are watching has the same reflectance as the data sheet, there is no problem, but in most cases this is not true. If you are watching a black car, only 5% of the light is reflected, so (against a 75% data-sheet surface) at least 15 times more light is required at the scene to give the same amount of reflected light. To compensate for the mismatch, use the modification factor shown below.

    Modification factor F1 = Rd/Ra
    Rd = reflectance used in the data sheet
    Ra = reflectance of the actual scene

    Lens Speed

    The reflected light moves towards the camera. The first device it meets is the lens, which has a certain iris opening. When specifying the minimum scene illumination, the data sheet usually assumes an F-stop of F1.4 or F1.2. The F-stop indicates the iris opening of the lens: the larger the F-stop value, the smaller the iris opening, and vice versa. If the lens being used at the scene does not have the same iris opening, the light required at the scene must be compensated for the mismatch.

    Modification factor F2 = Fa² / Fd²
    Fa = F-stop of actual lens
    Fd = F-stop of lens used in data sheet.

    Usable Video

    After passing through the lens the light reaches the CCD chip and generates a charge which is proportional to the light falling on a pixel. This charge is read out and converted into a video signal. Usable video is the minimum video signal specified in the camera data sheet to generate an acceptable picture on the monitor. It is usually measured as a percentage of the full video.

    Example: 30% usable video = 30% of 0.7 volts (full video, or maximum video amplitude) = 0.21 volts. The question here is: is this acceptable?

    Unfortunately there is no standard definition for usable video in the industry and most manufacturers do not indicate their definition in the data sheet while measuring the minimum scene illumination.

    It is recommended to be aware of the usable video percentage used by the manufacturer while specifying the minimum scene illumination in the data sheet. The minimum scene illumination should be modified if the usable video used in the data sheet is not acceptable.

    Modification Factor F3 = Ua/Ud
    Ua = actual video required at the site as % of full video
    Ud = usable video % used by the manufacturer

    AGC

    AGC stands for Automatic Gain Control. As the light level reduces the AGC switches on and the video signal gets a boost. Unfortunately, the noise present also gets a boost. However when the light levels are high, the AGC switches off automatically, because the boost could overload the pixels causing vertical streaking etc.

    The data sheet should indicate whether the AGC was "on" or "off" while measuring minimum scene illumination. If the data sheet indicates AGC "on" but in reality the AGC is "off", then the minimum scene illumination in the data sheet should be modified.

    Modification Factor F4 = Ad/Aa
    Ad = AGC position in the data sheet
    Aa = Actual AGC position

    Use 1 for AGC off; for AGC on, use the gain factor corresponding to the dB figure in the data sheet.

    Shutter Speed

    These days most cameras have an electronic shutter, which allows one to adjust the timing of the charge read-out of the CCD chip. The standard read-out is 50 times per second (PAL) or 60 times per second (NTSC). If the shutter speed is increased to, say, 1000 times per sec, the light required at the scene must be 20 times more (for PAL). Increasing the shutter speed gives a crisper picture, but requires more light. Use the following modification factor.

    Modification Factor F5 = Sa/Sd
    Sd = Default shutter speed (PAL - 1/50 sec NTSC - 1/60 sec)
    Sa = Actual shutter speed being used

    Adjusted Minimum Scene Illumination

    The minimum scene illumination of the camera must be adjusted because of the mismatch between the actual conditions in the field and the variables used in the data sheet.
    Ma = (F1*F2*F3*F4*F5) * Md
    Ma = adjusted minimum scene illumination
    Md = minimum scene illumination as per the camera data sheet
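    Putting the five modification factors together as a sketch (variable names follow the text; shutter speeds are expressed as read-outs per second, and the AGC factor F4 is passed in directly since its value depends on the data-sheet dB figure; the example values are illustrative):

```python
def adjusted_min_scene_illumination(md, rd, ra, fd, fa, ud, ua,
                                    f4=1.0, sd=50, sa=50):
    """Adjust the data-sheet minimum scene illumination Md for the
    mismatch between field conditions and data-sheet variables."""
    f1 = rd / ra             # reflectance: data sheet vs actual scene
    f2 = fa ** 2 / fd ** 2   # F-stop: actual lens vs data-sheet lens
    f3 = ua / ud             # usable video: required vs data sheet
    f5 = sa / sd             # shutter: actual vs default read-out rate
    return f1 * f2 * f3 * f4 * f5 * md

# Black car (5% reflectance) against a 75% data-sheet surface,
# with all other settings matching the data sheet:
ma = adjusted_min_scene_illumination(md=0.1, rd=0.75, ra=0.05,
                                     fd=1.4, fa=1.4, ud=0.3, ua=0.3)
print(round(ma, 2), "lux")  # 1.5 lux - 15x the data-sheet figure
```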

    Comparison

    Compare the actual light at the scene (L) with the adjusted minimum scene illumination (Ma). If the light available is more than the adjusted minimum scene illumination, then the current camera can be used. If the actual light at the scene is lower than the adjusted minimum scene illumination of the camera, then the camera setting may require adjustment or an alternative solution is necessary. The following steps will help resolve the issue.

    Step 1

    Check if camera variables can be changed
    • If AGC is switched off, then switch AGC on
    • Accept a lower usable video %
    • Reduce shutter speed, if possible
    • Use a lens with a lower F-stop

    If there is no success, go to Step 2.

    Step 2

    • Find a more sensitive camera
    • Downgrade from a color to a B/W camera
    • Add Infrared light if B/W camera is being used
    • Add more lighting at the scene

    Composite Video Signal

    In CCTV the video signal is called composite video. It has a maximum amplitude of 1 volt peak to peak and is made up of the following parts:

    - Video signal
    - Horizontal sync pulse
    - Vertical sync pulse

    Video Signal

    The greater the amount of light on the pixel, the larger the amplitude of the video signal. In a composite video signal, the maximum amplitude of the video component is 0.7 volts.

    Vertical Sync Pulses

    A video picture is made up of video frames. In NTSC there are 30 frames or 60 fields per sec, while PAL has 25 frames or 50 fields per sec.

    At the end of each frame or field, a vertical sync pulse is added. This pulse tells the electronic devices in the camera and other CCTV components that the field has come to an end and gets them ready for the next frame or field. The amplitude of this pulse is 0.3 volts; added to the video signal, this gives a total amplitude of 1 volt peak to peak.

    Horizontal Sync Pulse

    A video frame is made of lines. In NTSC there are 525 lines per frame, while PAL has 625. Each point along the line reflects the intensity of the video signal. At the end of each line, a horizontal sync pulse is added, telling the electronic devices in the CCTV system that the line has come to an end and to get ready for the start of the next line. This pulse also has an amplitude of 0.3 volts.

    The above is a quick overview of the components of a composite video. Below are some statistics and additional information about a video signal.

    Horizontal and Vertical Scanning Frequencies

    The following details the different frequencies under the NTSC and PAL systems:

    Frame Frequency: 30 per sec (NTSC) 25 per sec (PAL)
    Duration of each frame: 1 / 30 sec (NTSC) 1/25 sec (PAL)
    No of fields per frame: 2 (NTSC & PAL)
    Field frequency: 60 per sec (NTSC) 50 per sec (PAL)
    Duration of each field: 1 / 60 sec (NTSC) 1/50 sec (PAL)
    No of lines per frame: 525 (NTSC) 625 (PAL)
    No of lines per field: 262.5 (NTSC) 312.5 (PAL)
    No of lines per sec: 525 X 30 =15750 (NTSC) 625 X 25 = 15625 (PAL)
    Duration of each line: 1 / 15750 sec or 63.5us (NTSC) 1 / 15625 sec or 64us (PAL)
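    The derived figures in the table above follow from just the frame rate and line count:

```python
def scan_stats(frames_per_sec, lines_per_frame):
    """Derive field rate, line rate and line duration (us) from
    the frame rate and lines per frame (two interlaced fields)."""
    fields_per_sec = frames_per_sec * 2
    lines_per_sec = frames_per_sec * lines_per_frame
    line_duration_us = round(1e6 / lines_per_sec, 1)
    return fields_per_sec, lines_per_sec, line_duration_us

print("NTSC:", scan_stats(30, 525))  # (60, 15750, 63.5)
print("PAL:", scan_stats(25, 625))   # (50, 15625, 64.0)
```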

    Horizontal and Vertical Blanking

    Retrace or fly back is the time required to move from the end of one line to the start of the next line or from the end of one field to the start of the next field. No picture information is scanned during the retrace and therefore must be blanked out. In television, blanking means going to black level.
    The retrace must be very rapid, since it is wasted time in terms of picture information. Horizontal blanking takes approximately 16% of each line; vertical blanking takes approximately 8% of each field.

    Field duration: 1 / 60 sec (NTSC) 1 / 50 sec (PAL)
    Vertical blanking: approx. 8% of the field
    Lines lost to vertical blanking: approx. 21 per field (NTSC), 25 per field (PAL)
    Line duration: 63.5 us (NTSC) 64 us (PAL)
    Horizontal blanking: approx. 16% of the line, i.e. about 10 us

    Horizontal and Vertical Synchronization

    The blanking pulse puts the video signal at black level; the synchronization pulse starts the actual retrace in scanning. Each horizontal sync pulse is inserted in the video signal within the horizontal blanking time, and each vertical sync pulse within the vertical blanking time. The frequencies of the synchronization pulses are:

    Vertical 60 Hz (NTSC) 50 Hz (PAL)
    Horizontal 15750 Hz (NTSC) 15625 Hz (PAL)


    The Colour Signal

    A color video signal is the same as a monochrome signal except that the color information in the scene is also included. The following two signals are transmitted separately:

    Luminance signal:

    Known as the Y signal, it contains the variations in the picture information, as in a monochrome signal, and is used to reproduce the picture in black and white.

    Chrominance signal:

    Known as the C signal, it contains the color information. It is transmitted as modulation on a sub-carrier. The sub-carrier frequency is 3.58 MHz for NTSC and 4.43 MHz for PAL.


    Construction of the Composite Video Signal

    The composite video has the following parts:

    - Camera signal output corresponding to the variation of light in the scene
    - The sync pulses to synchronize the scanning
    - The blanking pulses to make the retrace invisible

    For color signals, the chrominance signal and color sync burst are added.

    Key Considerations for Effective 24-Hour CCTV

    When designing CCTV systems for effective 24-hour surveillance, there are particular areas that must be addressed regarding the night-time performance of the system:

    - Camera
    - Lens
    - Illumination

    1. Camera

    Not all cameras are the same, and some are better suited to providing effective coverage at night. It can be a minefield for installers, with impressive claims of zero/low-lux cameras, but in essence without light there can be no picture. All CCD cameras offer some degree of IR response, though some have enhanced IR performance, which makes them more suitable for longer-range applications or for use with low-power IR sources such as LEDs.

    Until recently, the most IR-sensitive cameras were based on frame-transfer chips. Recently, some new chip sets have become available, mainly in ½-inch formats.

    These offer excellent low-noise, high-resolution and low-smear characteristics together with excellent IR response. They also overcome some of the drawbacks of frame-transfer cameras.

    Some cameras offer integration as a method of improving night-time performance, multiplying the available light several-fold by accumulating successive frames. However, the application of this technology may be limited to more fixed or static scenes with little movement, because of the jerkiness the integration causes.
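    The trade-off behind that jerkiness is simple arithmetic. In this simplified model (my illustration, not the post's), integrating N frames gathers roughly N times the light but divides the effective update rate by N:

    ```python
    # Sketch: the trade-off of frame integration. Accumulating N successive
    # frames multiplies the gathered light by roughly N, but divides the
    # effective update rate, which is why moving subjects appear jerky.

    def integration_tradeoff(n_frames, base_rate_fps=25.0):
        """Return (light gain factor, effective update rate in fps)."""
        return n_frames, base_rate_fps / n_frames

    if __name__ == "__main__":
        gain, rate = integration_tradeoff(8)
        print(gain, rate)  # 8x the light, but only 3.125 updates per second
    ```

    At 8-frame integration a PAL camera refreshes the scene only about three times a second, which is fine for a car park entrance but useless for tracking a running intruder.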

    Several dual-mode cameras (day-night, dual technology) have been launched over the last few years. These are intended to provide the best compromise for 24-hour surveillance - colour by day and monochrome/IR-sensitive by night.

    There are different forms of dual mode: some incorporate optical filters that are moved over the CCD sensor for daytime/colour operation and removed during night-time/monochrome operation to maximise low-light sensitivity. Other camera designs incorporate specialised filters that offer both good colour performance and IR sensitivity.

    The key elements to consider when choosing your camera are:

    - Sensitivity - low light performance
    - Signal to noise ratio - a good s/n ratio will provide "clean" pictures
    - Spectral response - the ability of the camera to see IR wavelengths
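    Of the three elements listed, signal-to-noise ratio is the one usually quoted as a bare decibel figure on a datasheet. As a sketch of what that number means (the 50 dB reference point is my assumed example of a good CCD camera, not a claim from the post):

    ```python
    # Sketch: converting a camera's signal-to-noise ratio to decibels.
    # A higher figure means "cleaner" pictures with less visible grain.

    import math

    def snr_db(signal, noise):
        """Voltage-ratio SNR in dB: 20 * log10(signal / noise)."""
        return 20.0 * math.log10(signal / noise)

    if __name__ == "__main__":
        # A signal about 316 times larger than the noise is roughly 50 dB.
        print(round(snr_db(316.0, 1.0), 1))
    ```

    Every 6 dB roughly doubles the signal-to-noise voltage ratio, so small differences between datasheet figures matter more than they look.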

    2. Lens

    The night-time performance of lenses is sometimes overlooked. There is a compromise to be made: at night you want to maximise the light-gathering capability of your lens (i.e. have the smallest f-stop), but this reduces the depth of field of the picture, which may cause focusing problems. This is less of a problem with auto-iris lenses, where the lens naturally opens to its maximum aperture (lowest f-stop) in low light, but with a fixed-iris lens there may need to be a compromise between low-light operation and depth-of-field focusing.
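    The depth-of-field penalty of opening the iris can be seen with the standard hyperfocal-distance approximation. The lens, f-numbers and circle-of-confusion value below are assumed illustration figures (a nominal 1/2" sensor), not from the post:

    ```python
    # Sketch: why a wide-open iris (small f-number) reduces depth of field.
    # Standard hyperfocal-distance approximation; focusing at this distance
    # keeps everything from half that distance to infinity acceptably sharp.
    # The circle-of-confusion value is an assumed figure for a 1/2" sensor.

    def hyperfocal_mm(focal_mm, f_number, coc_mm=0.011):
        """Hyperfocal distance in millimetres."""
        return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

    if __name__ == "__main__":
        # An 8 mm lens wide open at f/1.2 versus stopped down to f/8:
        print(round(hyperfocal_mm(8.0, 1.2) / 1000, 2), "m")  # ~4.86 m
        print(round(hyperfocal_mm(8.0, 8.0) / 1000, 2), "m")  # ~0.74 m
    ```

    Stopped down to f/8 everything beyond about 0.4 m is sharp; wide open at f/1.2 that near limit moves out past 2 m, which is exactly the night-time focusing problem described above.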

    Focus Shift

    Daylight and IR light focus at slightly different points through the same lens.

    This may cause a focus shift between daytime and IR operation. The degree of focus shift depends on a variety of factors, including the quality of the lens and the wavelength of the IR filter (830 nm and 950 nm filters give a more exaggerated focus shift).

    However, more recently some manufacturers have developed a range of lenses with zero focus shift between daytime and IR performance.

    3. Illumination

    The key for a successful night-time scheme is having sufficient light, the right quality of light and the right control over the light.

    The best night-time solution for CCTV is Infra-Red lighting at the camera head controlled by either telemetry or photocell.

    Key Design Consideration

    730 nm filters are brighter in appearance than 830 nm or 950 nm filters and provide more usable infra-red radiation for cameras. In certain applications the visible red glow of 730 nm filters may provide an additional deterrent compared with 830 nm and 950 nm. When using 830 nm or 950 nm filters, ensure IR-enhanced cameras are used for maximum performance.

    Match the field of view of the camera/lens with the lens on the infra-red lamp.

    An even illumination is needed to allow a CCTV camera to work within its dynamic range.
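    Matching the camera's field of view to the lamp's spread starts with the angle of view, which follows from the sensor width and focal length. The 6.4 mm sensor width below is the nominal horizontal dimension of a 1/2" sensor (an assumed illustration figure):

    ```python
    # Sketch: horizontal angle of view from sensor width and focal length,
    # used to match the camera/lens coverage to the IR lamp's beam spread.

    import math

    def horizontal_fov_deg(sensor_width_mm, focal_mm):
        """Horizontal angle of view in degrees: 2 * atan(w / 2f)."""
        return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

    if __name__ == "__main__":
        # A nominal 1/2" sensor (6.4 mm wide) with an 8 mm lens:
        print(round(horizontal_fov_deg(6.4, 8.0), 1), "degrees")  # ~43.6
    ```

    If the lamp's beam is narrower than this angle, the edges of the picture fall outside the camera's dynamic range and go black; if it is much wider, IR output is wasted.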


