Archive for October, 2009

Broadcast Engineering

Wednesday, October 21st, 2009

 

Broadcast engineering is the field of electrical engineering, and now to some extent computer engineering and information technology, which deals with radio and television broadcasting. Audio engineering and RF engineering are also essential parts of broadcast engineering, being their own subsets of electrical engineering.

Broadcast engineering involves both the studio end and the transmitter end (the entire airchain), as well as remote broadcasts. Every station has a broadcast engineer, though one may now serve an entire station group in a city, or be a contract engineer who essentially freelances his services to several stations (often in small media markets) as needed.

Duties

Modern duties of a broadcast engineer include maintaining broadcast automation systems for the studio and automatic transmission systems for the transmitter plant. There are also important duties regarding radio towers, which must be maintained with proper lighting and painting. Occasionally a station’s engineer must deal with complaints of RF interference, particularly after a station has made changes to its transmission facilities.

Titles

Broadcast engineers may have varying titles depending on their level of expertise and field specialty. Some widely used titles include:

  • Broadcast design engineer
  • Broadcast systems engineer
  • Broadcast IT engineer
  • Broadcast network engineer
  • Broadcast maintenance engineer
  • Video broadcast engineer
  • TV studio broadcast engineer
  • Outside broadcast engineer
  • Remote broadcast engineer

Qualifications

Broadcast engineers may need to possess some or all of the following degrees, depending on the broadcast technical environment. If one of the formal qualifications is not present, a related degree or equivalent professional experience is desirable.

  • Degree in electrical engineering
  • Degree in electronic engineering
  • Degree in telecommunications engineering
  • Degree in computer engineering
  • Degree in management information systems
  • Degree in broadcast technology

Knowledge

Broadcast engineers are generally required to have knowledge in the following areas, from conventional video broadcast systems to modern Information Technology:

  • Conventional broadcast
    • Audio/video instrumentation measurement
    • Baseband video – standard / high-definition
    • Broadcast studio acoustics
    • Television studios – broadcast video cameras and camera lenses
    • Production switchers (video mixer)
    • Audio mixer
  • Broadcast IT
    • Video compression – DV25, MPEG, DVB or ATSC (or ISDB)
    • Digital server playout technologies – VDCP, Louth and Harris control protocols
    • Broadcast automation
    • Disk storage – RAID / NAS / SAN technologies.
    • Archives – Tape archives or grid storage technologies.
    • Computer networking
    • Operating systems – Microsoft Windows / Mac OS / Linux / RTOS
    • Post production – video capture and non-linear editing systems (NLEs).
  • RF
    • RF satellite uplinking – high-power amplifiers (HPA)
    • RF communications satellite downlinking – Band detection, carrier detection and IRD tuning, etc.
    • RF transmitter maintenance – IOT UHF transmitters, Solid State VHF transmitters, antennas, transmission line, high power filters, digital modulators.
  • Health & safety
    • Occupational safety and health
    • Fire suppression systems like FM 200.
    • Basic structural engineering
    • RF hazard mitigation

The above requirements vary from station to station.

Skills

Broadcast engineers must also have a problem-solving skill set and sound working methodology, along with soft skills, that help them make effective use of their knowledge base.

  • Self-motivated
  • Enthusiasm to learn about emerging technologies, hardware/software and applications.
  • Logical approach to problem solving and troubleshooting
  • Detail oriented.
  • Quick thinking
  • Calm under high-pressure situations
  • Good oral and written business communications, negotiation and time management skills
  • Leadership skills – Organizing and motivating a group of engineers
  • Drawing skills – To draw graphical Visio workflow diagrams or CAD schematic drawings
  • Training and mentoring skills – To train and mentor junior or fellow engineers or operational staff.

 

# For MORE, E-mail to info@makcissolutions.com .

RF…!!!

Wednesday, October 21st, 2009

RF Engineering, also known as Radio Frequency Engineering, is a subset of electrical engineering that deals with devices which are designed to operate in the Radio Frequency spectrum. These devices operate within the range of about 3 Hz up to 300 GHz.

RF Engineering is incorporated into almost everything that transmits or receives a radio wave, including, but not limited to, mobile phones, radios, Wi-Fi and walkie-talkies.

RF Engineering is a highly specialized field. To produce quality results, an in-depth knowledge of Mathematics, Physics and general electronics theory is required. Even with this, the initial design of an RF Circuit usually bears very little resemblance to the final physical circuit produced, as revisions to the design are often required to achieve intended results.

Duties

RF Engineers are specialists in their respective field and can take on many different roles, such as design and maintenance. An RF Engineer at a broadcast facility is responsible for maintenance of the station’s high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links and more. Typically, transmission equipment is past its expected lifetime, and there is little support available from the manufacturer. Often, creative and collaborative solutions are required. The range of technologies used is vast due to the wide array of frequencies allocated for different radio services, and due to the range in age of equipment. In general, older equipment is easier to service.

 

 # For MORE, E-mail to info@makcissolutions.com .

Computer Software?

Wednesday, October 21st, 2009

Computer software, or just software, is a general term used to describe the role that computer programs, procedures and documentation play in a computer system.

The term includes:

  • Application software, such as word processors, which perform productive tasks for users.
  • Firmware, which is software programmed into electrically programmable memory devices on mainboards or other types of integrated hardware carriers.
  • Middleware, which controls and co-ordinates distributed systems.
  • System software such as operating systems, which interface with hardware to provide the necessary services for application software.
  • Software testing, a domain that depends on development and programming, consists of various methods to test and declare a software product fit before it can be launched for use by either an individual or a group.
  • Testware, an umbrella term for all utilities and application software that serve in combination to test a software package but do not necessarily contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.

Software includes things such as websites, programs and video games that are written in programming languages like C or C++.

“Software” is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.

Overview 

Computer software is often regarded as anything but hardware, meaning that the “hard” parts are the tangible ones while the “soft” parts are the intangible objects inside the computer. Software encompasses an extremely wide array of products and technologies developed using different techniques such as programming languages, scripting languages, microcode, or an FPGA configuration. The types of software include web pages developed with technologies like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like OpenOffice and Microsoft Word developed with languages like C, C++, Java or C#. Software usually runs on an underlying operating system such as Linux or Microsoft Windows. Software also includes video games and the logic systems of modern consumer devices such as automobiles, televisions, and toasters.

Relationship to computer hardware

Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, software consists of a machine language specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. Software is an ordered sequence of instructions for changing the state of the computer hardware in a particular sequence. It is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine language object code. Software may also be written in an assembly language, essentially, a mnemonic representation of a machine language using a natural language alphabet. Assembly language must be assembled into object code via an assembler.
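
As a small, hedged illustration of the idea that high-level source is translated into lower-level instructions, the sketch below uses Python’s standard dis module to show the instruction sequence behind a one-line function. Python compiles to bytecode for its virtual machine rather than to native machine code, but the principle of translation is the same; the function itself is a made-up example.

```python
# Sketch: inspecting the lower-level instructions behind a high-level statement.
# Python compiles source to bytecode for its virtual machine rather than to
# native machine code, but the idea of translation is the same.
import dis

def add_one(x):
    return x + 1

# Disassemble the function to show the instruction sequence the interpreter runs.
dis.dis(add_one)
```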

The term “software” was first used in this sense by John W. Tukey in 1958. In computer science and software engineering, computer software is all computer programs. The theory that is the basis for most modern software was first proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem.

Types of software

Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary, and often blurred.

System software

System software helps run the computer hardware and computer system. It includes a combination of the following:

  • device drivers
  • operating systems
  • servers
  • utilities
  • windowing systems

The purpose of systems software is to unburden the applications programmer from the often complex details of the particular computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards, and also to partition the computer’s resources such as memory and processor time in a safe and stable manner. Examples are Windows XP, Linux and Mac OS.

Programming software

Programming software provides tools to assist a programmer in writing computer programs and software in different programming languages in a more convenient way. The tools include:

  • compilers
  • debuggers
  • interpreters
  • linkers
  • text editors

An Integrated development environment (IDE) is a single application that attempts to manage all these functions.

Application software

Application software allows end users to accomplish one or more specific (not directly computer development related) tasks.

Typical applications include:

  • industrial automation
  • business software
  • computer games
  • quantum chemistry and solid state physics software
  • telecommunications (i.e., the internet and everything that flows on it)
  • databases
  • educational software
  • medical software
  • military software
  • molecular modeling software
  • image editing
  • spreadsheets
  • word processing
  • decision-making software

Software topics

 

Software Architecture

Users often see things differently than programmers. People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.

  • Platform software: Platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC you will usually have the ability to change the platform software.
  • Application software: Application software or Applications are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other “system software” as applications.
  • User-written software: End-user development tailors systems to meet users’ specific needs. User software includes spreadsheet templates and word processor macros. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages and what has been added by co-workers.

Software Documentation

Most software has software documentation so that the end user can understand the program, what it does, and how to use it. Without clear documentation, software can be hard to use, especially if it is a very specialized and relatively complex program like Photoshop or AutoCAD.

Developer documentation may also exist, either with the code as comments and/or as separate files, detailing how the program works and how it can be modified.

Software Library

An executable is almost never sufficiently complete on its own for direct execution; it typically depends on software libraries. Software libraries are collections of functions and functionality that may be embedded in other applications. Operating systems include many standard software libraries, and applications are often distributed with their own libraries.
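
As a minimal sketch of the idea, the snippet below calls a function from a standard library that ships with the language runtime instead of re-implementing it; the values are arbitrary and only for illustration.

```python
# Sketch: using a function from a standard software library instead of
# re-implementing it. The math module ships with the Python runtime.
import math

radius = 2.0                                   # illustrative value
area = math.pi * radius ** 2                   # library-provided constant
print(f"Area of the circle: {area:.3f}")
print(f"Square root via the library: {math.sqrt(area):.3f}")
```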

Software Standard

Since software can be designed using many different programming languages and run on many different operating systems and operating environments, software standards are needed so that different software can understand and exchange information with each other. For instance, an email sent from Microsoft Outlook should be readable in Yahoo! Mail and vice versa.

Execution (computing)

Computer software has to be "loaded" into the computer's storage (such as a hard drive or memory). Once the software has loaded, the computer is able to execute it. This involves passing instructions from the application software, through the system software, to the hardware, which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation – moving data, carrying out a computation, or altering the control flow of instructions.

Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly. So, this is sometimes avoided by using “pointers” to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.

Quality and reliability

Software quality, Software testing, and Software reliability

Software quality is very important, especially for commercial and system software like Microsoft Office, Microsoft Windows and Linux. If software is faulty (buggy), it can delete a person’s work, crash the computer and do other unexpected things. Faults and errors are called “bugs.” Many bugs are discovered and eliminated (debugged) through software testing. However, software testing rarely – if ever – eliminates every bug; some programmers say that “every program has at least one more bug” (Lubarsky’s Law). All major software companies, such as Microsoft, Novell and Sun Microsystems, have their own software testing departments with the specific goal of just testing. Software can be tested through unit testing, regression testing and other methods, which are done manually or, most commonly, automatically, since the amount of code to be tested can be quite large. For instance, NASA has extremely rigorous software testing procedures for many of its operating systems and communication functions, because so much of its hardware and mission operation depends on software behaving correctly and working together.
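
As a minimal sketch of unit testing, the example below uses Python’s built-in unittest module; the divide function and its tests are hypothetical and not taken from any real product’s test suite.

```python
# Minimal unit-testing sketch using Python's built-in unittest module.
import unittest

def divide(a, b):
    """Return a divided by b, refusing division by zero."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_division(self):
        self.assertAlmostEqual(divide(10, 4), 2.5)

    def test_zero_divisor_is_rejected(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```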

Software license

The software’s license gives the user the right to use the software in the licensed environment. Some software comes with the license when purchased off the shelf, or an OEM license when bundled with hardware. Other software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Software can also be in the form of freeware or shareware.

Patents

Software patent and Software patent debate

Software can be patented; however, software patents are controversial in the software industry, with many people holding different views about them. The controversy is that a patent can give its holder exclusive rights to a specific algorithm or technique, so that duplicating it may be treated as intellectual property infringement. Some people believe that software patents hinder software development, while others argue that software patents provide an important incentive to spur software innovation.

Design and implementation

Design and implementation of software varies depending on the complexity of the software. For instance, the design and creation of Microsoft Word takes much longer than designing and developing Microsoft Notepad because of the difference in functionality between the two.

Software is usually designed and created (coded/written/programmed) in integrated development environments (IDEs) like Eclipse, Emacs and Microsoft Visual Studio that can simplify the process and compile the program. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, such as GTK+, JavaBeans or Swing. Libraries (APIs) are categorized for different purposes. For instance, the JavaBeans library is used for designing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. Underlying computer programming concepts like quicksort, hashtable, array, and binary tree can be useful when creating software. When a program is designed, it relies on the API. For instance, if a user is designing a Microsoft Windows desktop application, he or she might use the .NET Windows Forms library to design the desktop application and call its APIs like Form1.Close() and Form1.Show() to close or open the application, writing only the additional operations the application needs. Without these APIs, the programmer would have to write this functionality entirely on his or her own. Companies like Sun Microsystems, Novell, and Microsoft provide their own APIs, so many applications are written using their software libraries, which usually contain numerous APIs.
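
As a small sketch of one of the “underlying computer programming concepts” mentioned above, here is a simple illustrative quicksort; it is a teaching version, not a production sort routine.

```python
# Sketch of quicksort. This simple version builds new lists rather than
# sorting in place, trading efficiency for readability.
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([7, 2, 9, 4, 4, 1]))   # [1, 2, 4, 4, 7, 9]
```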

Software has special economic characteristics that make its design, creation, and distribution different from most other economic goods. A person who creates software is called a programmer, software engineer, software developer, or code monkey, terms that all have essentially the same meaning.

Industry and organizations

Software has its own niche industry, called the software industry, made up of the different entities and people that produce software, and as a result there are many software companies and programmers in the world. Because software is increasingly used in many different areas, such as finance, search, mathematics, space exploration, gaming and mining, software companies and people usually specialize in certain areas. For instance, Electronic Arts primarily creates video games.

Selling software can also be quite a profitable business. For instance, Bill Gates, the founder of Microsoft, was the second-richest man in the world in 2008, largely by selling the Microsoft Windows and Microsoft Office software programs. The same goes for Larry Ellison, largely through his Oracle database software. There are also many non-profit software organizations like the Free Software Foundation, GNU Project and Mozilla Foundation, as well as many software standards organizations like the W3C, IETF and others that try to come up with standards so that different software can work and interoperate with each other, through standards such as XML, HTML, HTTP or FTP.

Some of the well known software companies include Microsoft, Oracle, Novell, SAP, Adobe Systems, and Corel.

 

# For MORE, Email to info@makcissolutions.com .

Introduction to ELECTRONICS

Wednesday, October 21st, 2009

Electronics is a branch of science and technology that deals with the controlled flow of electrons. The ability to control electron flow is usually applied to information handling or device control. Electronics is distinct from electrical science and technology, which deals with the generation, distribution, control and application of electrical power. This distinction started around 1906 with the invention by Lee De Forest of the triode, which made electrical amplification possible with a non-mechanical device. Until 1950 this field was called “radio technology” because its principal application was the design and theory of radio transmitters, receivers and vacuum tubes.

Most electronic devices today use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering. This article focuses on engineering aspects of electronics.  

Electronic devices and components

Electronic component

An electronic component is any physical entity in an electronic system used to affect the electrons or their associated fields in a desired manner consistent with the intended function of the electronic system. Components are generally intended to be connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly or in more complex groups as integrated circuits. Some common electronic components are capacitors, resistors, diodes, transistors, etc.

Types of circuits

Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other, or a mix of the two types.

1. Analog circuits

Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage as opposed to discrete levels as in digital circuits.

The number of different analog circuits so far devised is huge, especially because a ‘circuit’ can be defined as anything from a single component, to systems containing thousands of components.

Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.

One rarely finds modern circuits that are entirely analog. These days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called “mixed signal” rather than analog or digital.

Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output.

2. Digital circuits

Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra and are the basis of all digital computers. To most engineers, the terms “digital circuit”, “digital system” and “logic” are interchangeable in the context of digital circuits. Most digital circuits use two voltage levels labeled “Low”(0) and “High”(1). Often “Low” will be near zero volts and “High” will be at a higher level depending on the supply voltage in use. Ternary (with three states) logic has been studied, and some prototype computers made.

Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital Signal Processors are another example.
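
As a minimal sketch of how Boolean algebra maps onto the building blocks listed below, the snippet models two of them (logic gates and a one-bit half adder) with Boolean values, where False stands for “Low” (0) and True for “High” (1); this is purely illustrative, not a hardware description.

```python
# Sketch: modelling logic gates and a half adder with Boolean values,
# where False is "Low" (0) and True is "High" (1).
def xor_gate(a, b):
    return a != b

def and_gate(a, b):
    return a and b

def half_adder(a, b):
    """Return (sum, carry) for two one-bit inputs."""
    return xor_gate(a, b), and_gate(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"a={int(a)} b={int(b)} -> sum={int(s)} carry={int(c)}")
```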

Building-blocks:

  • Logic gates
  • Adders
  • Binary Multipliers
  • Flip-Flops
  • Counters
  • Registers
  • Multiplexers
  • Schmitt triggers

Highly integrated devices:

  • Microprocessors
  • Microcontrollers
  • Application-specific integrated circuit (ASIC)
  • Digital signal processor (DSP)
  • Field-programmable gate array (FPGA)

Heat dissipation and thermal management

Thermal management of electronic devices and systems

Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Techniques for heat dissipation can include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.
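
As a rough, hedged illustration of why thermal design matters, the sketch below estimates a junction temperature from an assumed power dissipation and an assumed junction-to-ambient thermal resistance; all numeric values are made up for illustration.

```python
# Rough junction-temperature estimate: T_junction = T_ambient + P * R_theta_ja.
# All values below are assumed for illustration only.
power_dissipated_w = 2.5        # watts dissipated in the device (assumed)
r_theta_ja_c_per_w = 40.0       # junction-to-ambient thermal resistance (assumed)
t_ambient_c = 25.0              # ambient temperature in Celsius (assumed)

t_junction_c = t_ambient_c + power_dissipated_w * r_theta_ja_c_per_w
print(f"Estimated junction temperature: {t_junction_c:.1f} C")
# A heat sink or fan lowers the effective thermal resistance and thus T_junction.
```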

Noise

Electronic noise

Noise is associated with all electronic circuits. Noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise may be electromagnetically or thermally generated; the latter can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed, as they are due to limitations in physical properties.
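
As a small worked example of thermally generated noise, the sketch below evaluates the Johnson-Nyquist formula for a resistor’s thermal noise voltage, v_rms = sqrt(4·k·T·R·B); the resistance, temperature and bandwidth are assumed values.

```python
# Thermal (Johnson-Nyquist) noise voltage of a resistor:
# v_rms = sqrt(4 * k * T * R * B). Values below are illustrative.
import math

k = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0               # absolute temperature in kelvin (assumed room temperature)
R = 10e3                # resistance in ohms (assumed 10 kilo-ohm)
B = 20e3                # measurement bandwidth in hertz (assumed 20 kHz)

v_rms = math.sqrt(4 * k * T * R * B)
print(f"Thermal noise: {v_rms * 1e6:.2f} microvolts rms")
# Cooling the circuit (lowering T) reduces this noise, as the text notes.
```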

Electronics theory

Mathematical methods in electronics

Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.

Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
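
As a minimal sketch of what a circuit simulator does internally, the example below applies nodal analysis to an assumed two-node resistive circuit driven by a 1 mA current source: it builds the conductance matrix G, then solves the linear system G·v = i for the node voltages. The component values are illustrative only.

```python
# Nodal analysis sketch: solve G * v = i for the unknown node voltages.
# Circuit (assumed for illustration): a 1 mA source into node 1,
# R1 = 1 kOhm from node 1 to ground, R2 = 2 kOhm between nodes 1 and 2,
# R3 = 3 kOhm from node 2 to ground.
import numpy as np

R1, R2, R3 = 1e3, 2e3, 3e3
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
i = np.array([1e-3, 0.0])          # injected currents at nodes 1 and 2

v = np.linalg.solve(G, i)
print(f"Node voltages: v1 = {v[0]:.3f} V, v2 = {v[1]:.3f} V")
```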

Also important to electronics is the study and understanding of electromagnetic field theory.

Computer aided design (CAD)

Electronic design automation

Today’s electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), Eagle PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus) and many others.

Construction methods

Electronic packaging

Many different methods of connecting components have been used over the years. For instance, early electronics often used point to point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wraps were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its light yellow-to-brown colour.

 

# For MORE, E-mail to info@makcissolutions.com .

Electronic Engineering

Wednesday, October 21st, 2009

Electronics engineering, also referred to as electronic engineering, is an engineering discipline which uses the scientific knowledge of the behavior and effects of electrons to develop components, devices, systems, or equipment (such as electron tubes, transistors, integrated circuits, and printed circuit boards) that use electricity as part of their driving force. Both terms denote a broad engineering field that encompasses many subfields, including those that deal with power, instrumentation engineering, telecommunications, semiconductor circuit design, and many others.

The term also covers a large part of electrical engineering degree courses as studied at most European universities. In the U.S., however, electrical engineering encompasses all electrical disciplines including electronics. The Institute of Electrical and Electronics Engineers is one of the most important and influential organizations for electronic engineers. Indian universities have separate departments for electronics engineering.

Terminology

The name electrical engineering is still used to cover electronic engineering amongst some of the older (notably American and Australian) universities, and graduates there are called electrical engineers. Some people believe the term ‘electrical engineer’ should be reserved for those having specialized in power and heavy current or high voltage engineering, while others believe that power is just one subset of electrical engineering (and indeed the term ‘power engineering’ is used in that industry), as is ‘electrical distribution engineering’. Again, in recent years there has been a growth of new separate-entry degree courses such as ‘information engineering’ and ‘communication systems engineering’, often followed by academic departments of similar name. Most European universities now refer to electrical engineering as power engineering and make a distinction between it and electronics engineering. Beginning in the 1980s, the term computer engineer was often used to refer to electronic or information engineers. However, computer engineering is now considered a subset of electronics engineering and the term is becoming archaic.

History of electronic engineering

Electronic engineering as a profession sprang from technological improvements in the telegraph industry in the late 1800s and the radio and the telephone industries in the early 1900s. People were attracted to radio by the technical fascination it inspired, first in receiving and then in transmitting. Many who went into broadcasting in the 1920s were only ‘amateurs’ in the period before World War I.

The modern discipline of electronic engineering was to a large extent born out of telephone, radio, and television equipment development and the large amount of electronic systems development during World War II of radar, sonar, communication systems, and advanced munitions and weapon systems. In the interwar years, the subject was known as radio engineering and it was only in the late 1950s that the term electronic engineering started to emerge.

The electronic laboratories (Bell Labs in the United States, for instance) created and subsidized by large corporations in the industries of radio, television, and telephone equipment began churning out a series of electronic advances. In 1948 came the transistor and in 1960 the IC, which revolutionized the electronic industry. In the UK, the subject of electronic engineering became distinct from electrical engineering as a university degree subject around 1960. Before this time, students of electronics and related subjects like radio and telecommunications had to enroll in the electrical engineering department of the university, as no university had departments of electronics. Electrical engineering was the nearest subject with which electronic engineering could be aligned, although the similarities in subjects covered (except mathematics and electromagnetism) lasted only for the first year of the three-year course.

Early electronics

In 1893, Nikola Tesla made the first public demonstration of radio communication. Addressing the Franklin Institute in Philadelphia and the National Electric Light Association, he described and demonstrated in detail the principles of radio communication. In 1896, Guglielmo Marconi went on to develop a practical and widely used radio system. In 1904, John Ambrose Fleming, the first professor of electrical engineering at University College London, invented the first radio tube, the diode. One year later, in 1906, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.

Electronics is often considered to have begun when Lee De Forest invented the vacuum tube in 1907. Within 10 years, his device was used in radio transmitters and receivers as well as systems for long distance telephone calls. In 1912, Edwin H. Armstrong invented the regenerative feedback amplifier and oscillator; he also invented the superheterodyne radio receiver and could be considered the father of modern radio.  Vacuum tubes remained the preferred amplifying device for 40 years, until researchers working for William Shockley at Bell Labs invented the transistor in 1947. In the following years, transistors made small portable radios, or transistor radios, possible as well as allowing more powerful mainframe computers to be built. Transistors were smaller and required lower voltages than vacuum tubes to work. In the interwar years the subject of electronics was dominated by the worldwide interest in radio and to some extent telephone and telegraph communications. The terms ‘wireless’ and ‘radio’ were then used to refer to anything electronic. There were indeed few non-military applications of electronics beyond radio at that time until the advent of television. The subject was not even offered as a separate university degree subject until about 1960.

Prior to World War II, the subject was commonly known as ‘radio engineering’ and basically was restricted to aspects of communications and RADAR, commercial radio and early television. At this time, study of radio engineering at universities could only be undertaken as part of a physics degree. Later, in post war years, as consumer devices began to be developed, the field broadened to include modern TV, audio systems, Hi-Fi and latterly computers and microprocessors. In the mid to late 1950s, the term radio engineering gradually gave way to the name electronic engineering, which then became a stand alone university degree subject, usually taught alongside electrical engineering with which it had become associated due to some similarities.

Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by hand. These non-integrated circuits consumed much space and power, were prone to failure and were limited in speed although they are still common in simple applications. By contrast, integrated circuits packed a large number — often millions — of tiny electrical components, mainly transistors, into a small chip around the size of a coin.

Tubes or valves

The vacuum tube detector

The invention of the triode amplifier, generator, and detector made audio communication by radio practical. (Reginald Fessenden’s 1906 transmissions used an electro-mechanical alternator.) The first known radio news program was broadcast 31 August 1920 by station 8MK, the unlicensed predecessor of WWJ (AM) in Detroit, Michigan. Regular wireless broadcasts for entertainment commenced in 1922 from the Marconi Research Centre at Writtle near Chelmsford, England.

While some early radios used some type of amplification through electric current or battery, through the mid 1920s the most common type of receiver was the crystal set. In the 1920s, amplifying vacuum tubes revolutionized both radio receivers and transmitters.

Television

In 1928 Philo Farnsworth made the first public demonstration of a purely electronic television. During the 1930s several countries began broadcasting, and after World War II it spread to millions of receivers, eventually worldwide. Ever since then, electronics have been fully present in television devices.

Modern televisions and video displays have evolved from bulky electron tube technology to use more compact devices, such as plasma and LCD displays. The trend is for even lower power devices such as organic light-emitting diode displays, which are likely to replace the LCD and plasma technologies.

Radar and radio location

During World War II many efforts were expended on the electronic location of enemy targets and aircraft. These included radio beam guidance of bombers, electronic countermeasures, early radar systems, etc. During this time very little if any effort was expended on consumer electronics developments.

Computers

History of computing hardware

In 1941, Konrad Zuse presented the Z3, the world’s first functional computer. After the Colossus computer in 1943, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed in 1946, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives. Early examples include the Apollo missions and the NASA moon landing.

Transistors

The invention of the transistor in 1947 by William B. Shockley, John Bardeen and Walter Brattain opened the door for more compact devices and led to the development of the integrated circuit in 1959 by Jack Kilby.

Microprocessors

In 1969, Ted Hoff conceived the commercial microprocessor at Intel and thus ignited the development of the personal computer. Hoff’s invention was part of an order by a Japanese company for a desktop programmable electronic calculator, which Hoff wanted to build as cheaply as possible. The first realization of the microprocessor was the Intel 4004, a 4-bit processor, released in 1971, but only in 1974 did the Intel 8080, an 8-bit processor, make the building of the first personal computer, the MITS Altair 8800, possible. The first PC was announced to the general public on the cover of the January 1975 issue of Popular Electronics.

In the field of electronic engineering, engineers design and test circuits that use the electromagnetic properties of electrical components such as resistors, capacitors, inductors, diodes and transistors to achieve a particular functionality. The tuner circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit.
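
As a small worked example of the tuner idea, the sketch below evaluates the resonant frequency of a simple LC circuit, f = 1 / (2π√(LC)); the inductor and capacitor values are assumed purely for illustration.

```python
# Resonant frequency of a simple LC tuner circuit: f = 1 / (2 * pi * sqrt(L * C)).
# Component values are assumed purely for illustration.
import math

L = 100e-6   # inductance in henries (assumed 100 microhenries)
C = 250e-12  # capacitance in farads (assumed 250 picofarads)

f_resonant = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"Tuned frequency: {f_resonant / 1e3:.0f} kHz")
# Varying C (as a tuning capacitor does) moves this frequency across the band.
```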

In designing an integrated circuit, electronics engineers first construct circuit schematics that specify the electrical components and describe the interconnections between them. When completed, VLSI engineers convert the schematics into actual layouts, which map the layers of various conductor and semiconductor materials needed to construct the circuit. The conversion from schematics to layouts can be done by software but very often requires human fine-tuning to decrease space and power consumption. Once the layout is complete, it can be sent to a fabrication plant for manufacturing.

Integrated circuits and other electrical components can then be assembled on printed circuit boards to form more complicated circuits. Today, printed circuit boards are found in most electronic devices including televisions, computers and audio players.

Typical electronic engineering undergraduate syllabus

Apart from electromagnetics and network theory, other items in the syllabus are particular to the electronics engineering course. Electrical engineering courses have other specialisms such as machines, power generation and distribution. Note that the following list does not include the extensive engineering mathematics curriculum that is a prerequisite to a degree.

Electromagnetics

Elements of vector calculus: divergence and curl; Gauss’ and Stokes’ theorems, Maxwell’s equations: differential and integral forms. Wave equation, Poynting vector. Plane waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas: Dipole antennas; antenna arrays; radiation pattern; reciprocity theorem, antenna gain.

Network analysis

Network graphs: matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems: superposition, Thevenin’s and Norton’s theorems, maximum power transfer, Wye-Delta transformation. Steady state sinusoidal analysis using phasors. Linear constant coefficient differential equations; time domain analysis of simple RLC circuits; solution of network equations using the Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations for networks.

Electronic devices and circuits

Electronic devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, resistivity. Generation and recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photo diode, LASERs. Device technology: integrated circuit fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.

Analog circuits: Equivalent circuits (large and small-signal) of diodes, BJTs, JFETs, and MOSFETs. Simple diode circuits: clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single- and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits. Power supplies.

Digital circuits: Boolean functions; logic gates; digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: arithmetic circuits, code converters, multiplexers and decoders. Sequential circuits: latches and flip-flops, counters and shift-registers. Sample and hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor 8086: architecture, programming, memory and I/O interfacing.

Signals and systems

Definitions and properties of Laplace transforms, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier transform, z-transform. Sampling theorems. Linear Time-Invariant (LTI) systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros, frequency response, group delay, phase delay. Signal transmission through LTI systems. Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density, and the analogy between vectors and functions.

Control systems

Basic control system components; block diagrammatic description, reduction of block diagrams — Mason’s rule. Open loop and closed loop (negative unity feedback) systems and stability analysis of these systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Analysis of steady-state disturbance rejection and noise sensitivity.

Tools and techniques for LTI control system analysis and design: root loci, Routh-Hurwitz stability criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of the Proportional-Integral-Derivative (PID) controller. Discretization of continuous time systems using zero-order hold (ZOH) and ADCs for digital controller implementation. Limitations of digital controllers: aliasing. State variable representation and solution of the state equation of LTI control systems. Linearization of nonlinear dynamical systems with state-space realizations in both frequency and time domains. Fundamental concepts of controllability and observability for MIMO LTI systems. State space realizations: observable and controllable canonical form. Ackermann’s formula for state-feedback pole placement. Design of full order and reduced order estimators.

Communications

Analog communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers and noise conditions.

Digital communication systems: pulse code modulation (PCM), differential pulse code modulation (DPCM), delta modulation (DM), digital modulation schemes – amplitude, phase and frequency shift keying (ASK, PSK, FSK), matched filter receivers, bandwidth considerations and probability of error calculations for these schemes, GSM, TDMA.

Education and training

Electronics engineers typically possess an academic degree with a major in electronic engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science, Bachelor of Applied Science, or Bachelor of Technology depending upon the university. Many UK universities also offer Master of Engineering (MEng) degrees at undergraduate level.

The degree generally includes units covering physics, chemistry, mathematics, project management and specific topics in electrical engineering. Initially such topics cover most, if not all, of the subfields of electronic engineering. Students then choose to specialize in one or more subfields towards the end of the degree.

Some electronics engineers also choose to pursue a postgraduate degree such as a Master of Science (MSc), Doctor of Philosophy in Engineering (PhD), or an Engineering Doctorate (EngD). The Master’s degree is being introduced in some European and American universities as a first degree, and the differentiation between an engineer with graduate and postgraduate studies is often difficult. In these cases, experience is taken into account. The Master’s degree may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy consists of a significant research component and is often viewed as the entry point to academia.

In most countries, a Bachelor’s degree in engineering represents the first step towards certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia) or European Engineer (in much of the European Union).

Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electronic systems. Although most electronic engineers will understand basic circuit theory, the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI but are largely irrelevant to engineers working with macroscopic electrical systems.

Professional bodies

Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Electrical Engineers (IEE), now the Institution of Engineering and Technology (IET). The IEEE claims to produce 30 percent of the world’s literature in electrical/electronic engineering, has over 370,000 members, and holds more than 450 IEEE sponsored or cosponsored conferences worldwide each year.

Modern electronic engineering

Electronic engineering in Europe is a very broad field that encompasses many subfields, including those that deal with electronic devices and circuit design, control systems, electronics and telecommunications, computer systems, embedded software and so on. Many European universities now have departments of electronics that are completely separate from their respective departments of electrical engineering.

Subfields

Electronic engineering has many subfields. This section describes some of the most popular subfields in electronic engineering; although there are engineers who focus exclusively on one subfield, there are also many who focus on a combination of subfields.

Overview of electronic engineering

Electronic engineering involves the design and testing of electronic circuits that use the electronic properties of components such as resistors, capacitors, inductors, diodes and transistors to achieve a particular functionality.

Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information.

For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error checking and error detection of digital signals.
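
As a minimal sketch of digital signal processing, the snippet below applies a simple moving-average (FIR) low-pass filter to a noisy sampled tone; the sample rate, tone frequency and noise level are assumed for illustration.

```python
# Sketch of a simple digital filter: a moving-average (FIR) low-pass applied
# to a noisy sampled signal. Signal parameters are assumed for illustration.
import numpy as np

fs = 1000.0                                  # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5 * t)           # 5 Hz tone (assumed)
noisy = signal + 0.3 * np.random.randn(t.size)

taps = np.ones(20) / 20                      # 20-point moving average
filtered = np.convolve(noisy, taps, mode="same")
print(f"Noise std before: {np.std(noisy - signal):.3f}, "
      f"after: {np.std(filtered - signal):.3f}")
```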

Telecommunications engineering deals with the transmission of information across a channel such as a co-axial cable, optical fiber or free space.

Transmissions across free space require information to be encoded in a carrier wave in order to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer.
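
As a small sketch of amplitude modulation, the example below mixes a low-frequency message onto a carrier and inspects the spectrum, which shows the carrier plus two sidebands at the carrier frequency plus and minus the message frequency; all frequencies and the modulation index are assumed values.

```python
# Sketch of amplitude modulation: a low-frequency message shifts up to a
# carrier frequency. Frequencies and modulation index are assumed.
import numpy as np

fs = 100_000.0                       # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1.0 / fs)
f_carrier, f_message, m = 10_000.0, 500.0, 0.5

message = np.cos(2 * np.pi * f_message * t)
am_signal = (1 + m * message) * np.cos(2 * np.pi * f_carrier * t)

# The spectrum shows the carrier plus sidebands at f_carrier +/- f_message.
spectrum = np.abs(np.fft.rfft(am_signal))
freqs = np.fft.rfftfreq(am_signal.size, 1.0 / fs)
top = freqs[np.argsort(spectrum)[-3:]]
print(f"Strongest spectral components near: {sorted(top.tolist())} Hz")
```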

Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. If the signal strength of a transmitter is insufficient the signal’s information will be corrupted by noise.

Control engineering has a wide range of applications from the flight and propulsion systems of commercial airplanes to the cruise control present in many modern cars. It also plays an important role in industrial automation.

Control engineers often utilize feedback when designing control systems. For example, in a car with cruise control the vehicle’s speed is continuously monitored and fed back to the system which adjusts the engine’s power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.
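
As a minimal sketch of the cruise-control example, the snippet below simulates a proportional feedback controller adjusting engine force to hold a set speed; the vehicle mass, drag coefficient and gain are assumed, and the small residual error at the end is the steady-state offset typical of proportional-only control.

```python
# Sketch of feedback control: a proportional cruise controller adjusting
# engine force to hold a set speed. All vehicle parameters are assumed.
mass = 1200.0          # vehicle mass in kg (assumed)
drag = 30.0            # simple linear drag coefficient, N per (m/s) (assumed)
kp = 800.0             # proportional gain (assumed)
setpoint = 25.0        # desired speed, m/s

speed, dt = 20.0, 0.1
for step in range(300):
    error = setpoint - speed                   # measured speed fed back
    force = max(0.0, kp * error)               # controller output (no braking)
    accel = (force - drag * speed) / mass
    speed += accel * dt

print(f"Speed after 30 s: {speed:.2f} m/s (setpoint {setpoint} m/s)")
```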

Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow and temperature. These devices are known as instrumentation.

The design of such instrumentation requires a good understanding of physics that often extends beyond electromagnetic theory. For example, radar guns use the Doppler effect to measure the speed of oncoming vehicles. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.
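
As a small worked example of the radar-gun case, the sketch below evaluates the Doppler-shift approximation for reflection off a moving target, Δf ≈ 2·v·f0/c; the radar frequency and vehicle speed are assumed values.

```python
# Sketch of the Doppler-shift relation a radar gun relies on:
# frequency shift ~= 2 * v * f0 / c for a target moving at speed v (v << c).
# Values are assumed for illustration.
c = 3.0e8              # speed of light, m/s
f0 = 24.15e9           # radar operating frequency in Hz (assumed K-band)
v = 30.0               # vehicle speed toward the radar, m/s (assumed, ~108 km/h)

doppler_shift = 2.0 * v * f0 / c
print(f"Doppler shift: {doppler_shift:.0f} Hz")   # a few kilohertz
```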

Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace’s temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.

Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware, the design of PDAs or the use of computers to control an industrial plant. Computer engineers may also work on a system’s software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline.

Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of devices including video game consoles and DVD players.

Project engineering

For most engineers not involved at the cutting edge of system design and development, technical work accounts for only a fraction of the work they do. A lot of time is also spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.

The workplaces of electronics engineers are just as varied as the types of work they do. Electronics engineers may be found in the pristine laboratory environment of a fabrication plant, the offices of a consulting firm or in a research laboratory. During their working life, electronics engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers and other engineers.

Obsolescence of technical skills is a serious concern for electronics engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency.

 

# For MORE E-mail to info@makcissolutions.com .

Power Electronics

Wednesday, October 21st, 2009

Introduction

Power electronics is the application of solid-state electronics for the control and conversion of electric power.

Power electronic converters can be found wherever there is a need to modify the electrical energy form (i.e. modify its voltage, current or frequency). Their power range therefore spans from some milliwatts (as in a mobile phone) to hundreds of megawatts (e.g. in an HVDC transmission system). With “classical” electronics, electrical currents and voltages are used to carry information, whereas with power electronics they carry power. The main metric of power electronics therefore becomes efficiency.

The first very high power electronic devices were mercury arc valves. In modern systems the conversion is performed with semiconductor switching devices such as diodes, thyristors and transistors. In contrast to electronic systems concerned with transmission and processing of signals and data, in power electronics substantial amounts of electrical energy are processed. An AC/DC converter (rectifier) is the most typical power electronics device found in many consumer electronic devices, e.g., television sets, personal computers, battery chargers, etc. The power range is typically from tens of watts to several hundred watts. In industry the most common application is the variable speed drive (VSD) that is used to control an induction motor. The power range of VSDs starts from a few hundred watts and ends at tens of megawatts.

Power conversion systems can be classified according to the type of input and output power:

  • AC to DC (rectification)
  • DC to AC (inversion)
  • DC to DC (chopping)
  • AC to AC

Principle                                              

As efficiency is at a premium in a power electronic converter, the losses that a power electronic device generates should be as low as possible. The instantaneous power dissipated by a device is equal to the product of the voltage across the device and the current through it (P = V × I). From this, one can see that the losses of a power device are at a minimum when the voltage across it is zero (the device is in the on-state) or when no current flows through it (off-state). Therefore, a power electronic converter is built around one (or more) devices operating in switching mode (either on or off). With such a structure, the energy is transferred from the input of the converter to its output in bursts.
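To make this concrete, the following short Python sketch compares the power dissipated by a hypothetical device in the on-state, the off-state and a linear operating point; all the voltage and current figures are invented for illustration and are not taken from any datasheet.

    def dissipated_power(voltage_across, current_through):
        """Instantaneous dissipated power P = V * I."""
        return voltage_across * current_through

    # Hypothetical operating points, for illustration only.
    p_on = dissipated_power(voltage_across=0.05, current_through=10.0)      # on-state: tiny voltage drop
    p_off = dissipated_power(voltage_across=400.0, current_through=1e-4)    # off-state: negligible leakage
    p_linear = dissipated_power(voltage_across=200.0, current_through=5.0)  # linear mode: both are large

    print(f"on-state:  {p_on:.2f} W")      # 0.50 W
    print(f"off-state: {p_off:.2f} W")     # 0.04 W
    print(f"linear:    {p_linear:.2f} W")  # 1000.00 W

This is why a converter keeps its devices in one of the two switching states and avoids linear operation.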

Applications

Power electronic systems are found in virtually every electronic device. For example:

  • DC/DC converters are used in most mobile devices (mobile phones, PDAs, etc.) to maintain the output voltage at a fixed value regardless of the battery’s charge level (an ideal buck-converter duty-cycle sketch follows this list). These converters are also used for electronic isolation and power factor correction.
  • AC/DC converters (rectifiers) are used every time an electronic device is connected to the mains (computers, televisions, etc.).
  • AC/AC converters are used to change either the voltage level or the frequency (international power adapters, light dimmers). In power distribution networks, AC/AC converters may be used to exchange power between 50 Hz and 60 Hz utility grids.
  • DC/AC converters (inverters) are used primarily in uninterruptible power supplies (UPS) and emergency lighting. When mains power is available, it charges the DC battery; during a blackout, the battery is used to produce AC electricity that powers the connected appliances.
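As a minimal illustration of the DC/DC converter item above, the sketch below computes the duty cycle an ideal buck (step-down) converter would need in order to hold a fixed output voltage while the battery voltage falls; the voltage values are assumptions chosen for the example, not data for any particular device.

    V_OUT = 3.3  # desired fixed output voltage in volts (assumed)

    def buck_duty_cycle(v_in, v_out=V_OUT):
        """Ideal buck converter: Vout = D * Vin, so D = Vout / Vin."""
        if v_in <= v_out:
            raise ValueError("a buck converter can only step the voltage down")
        return v_out / v_in

    # Battery voltage falling as it discharges (assumed values).
    for v_battery in (4.2, 3.9, 3.6):
        print(f"Vin = {v_battery:.1f} V -> duty cycle = {buck_duty_cycle(v_battery):.2f}")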

Power semiconductor device

Power semiconductor devices are semiconductor devices used as switches or rectifiers in power electronic circuits (switch mode power supplies, for example). They are also called power devices or, when used in integrated circuits, power ICs.

Most power semiconductor devices are only used in commutation mode (i.e they are either on or off), and are therefore optimized for this. Most of them should not be used in linear operation.

 History 

Power semiconductor devices first appeared in 1952 with the introduction of the power diode by R.N. Hall. It was made of germanium and had a voltage capability of 200 volts and a current rating of 35 amperes.

The thyristor appeared in 1957. Thyristors are able to withstand very high reverse breakdown voltage and are also capable of carrying high current. One disadvantage of the thyristor for switching circuits is that once it is ‘latched-on’ in the conducting state it cannot be turned off by external control. The thyristor turn-off is passive, i.e., the power must be disconnected from the device.

The first bipolar transistors with substantial power handling capabilities were introduced in the 1960s. These components overcame some limitations of thyristors because they can be turned on or off with an applied signal.

With improvements in metal-oxide-semiconductor (MOS) technology (initially developed to produce integrated circuits), power MOSFETs became available in the late 1970s. International Rectifier introduced a 25 A, 400 V power MOSFET in 1978. These devices allow operation at higher frequencies than bipolar transistors, but are limited to low-voltage applications.

The Insulated Gate Bipolar Transistor (IGBT), developed in the 1980s, became widely available in the 1990s. This component has the power handling capability of the bipolar transistor together with the advantages of the isolated gate drive of the power MOSFET.

Common power devices                                                                                     

Some common power devices are the power diode, thyristor, power MOSFET and IGBT (insulated gate bipolar transistor). A power diode or MOSFET operates on similar principles to its low-power counterpart, but is able to carry a larger amount of current and typically is able to support a larger reverse-bias voltage in the off-state.

Structural changes are often made in power devices to accommodate the higher current density, higher power dissipation and/or higher reverse breakdown voltage. The vast majority of discrete (i.e. non-integrated) power devices are built using a vertical structure, whereas small-signal devices employ a lateral structure. With the vertical structure, the current rating of the device is proportional to its area, and the voltage blocking capability is achieved in the height of the die. With this structure, one of the connections of the device is located on the bottom of the semiconductor die.

Common power semiconductor devices

The realm of power devices is divided into two main categories:

  • The two-terminal devices (diodes), whose state is completely dependent on the external power circuit they are connected to;
  • The three-terminal devices, whose state is not only dependent on their external power circuit, but also on the signal on their driving terminal (gate or base). Transistors and thyristors belong to that category.

A second classification is less obvious, but has a strong influence on device performance: Some devices are majority carrier devices (Schottky diode, MOSFET), while the others are minority carrier devices (Thyristor, bipolar transistor, IGBT). The former use only one type of charge carriers, while the latter use both (i.e electrons and holes). The majority carrier devices are faster, but the charge injection of minority carrier devices allows for better On-state performance.

Diodes

An ideal diode should have the following behaviour:

  • When forward-biased, the voltage across the end terminals of the diode should be zero, whatever the current that flows through it (on-state);
  • When reverse-biased, the leakage current should be zero whatever the voltage (off-state).

Moreover, the transition between on and off states should be instantaneous.

In reality, the design of a diode is a trade-off between performance in the on-state, the off-state and commutation. Indeed, it is the same area of the device (actually the lightly-doped region of a PiN diode) that has to sustain the blocking voltage in the off-state and allow current flow in the on-state. As the requirements for the two states are completely opposite, it can be intuitively seen that a diode has to be either optimised for one of them, or that time must be allowed to switch from one state to the other (i.e. the commutation speed must be slowed down).

This trade-off between on-state, off-state and switching speed is the same for all power devices. A Schottky diode has excellent switching speed and on-state performance, but a high level of leakage current in off-state. PiN diodes are commercially available in different commutation speeds (so-called fast rectifier, ultrafast rectifier…), but any increase in speed is paid by lower performance in on-state.

Switches

The trade-off between voltage, current and frequency ratings also exists for the switches. In fact, all power semiconductors rely on a PiN diode structure to sustain voltage. The power MOSFET has the advantages of a majority carrier device, so it can achieve a very high operating frequency, but it cannot be used with high voltages; as this is a physical limit, no improvement is expected from silicon MOSFETs concerning their maximum voltage ratings. However, its excellent performance at low voltage makes it the device of choice (actually the only choice) for applications below 200 V. By paralleling several devices, it is possible to increase the current rating of a switch. The MOSFET is particularly suited to this configuration, because its positive thermal coefficient of resistance tends to balance current between the individual devices.

The IGBT is a more recent component, so its performance improves regularly as technology evolves. It has already completely replaced the bipolar transistor in power applications, and the availability of power modules (in which several IGBT dies are connected in parallel) makes it attractive for power levels up to several megawatts, pushing further the limit at which thyristors and GTOs become the only option. Basically, an IGBT is a bipolar transistor driven by a power MOSFET: it has the advantages of being a minority carrier device (good performance in the on-state, even for high-voltage devices) combined with the high input impedance of a MOSFET (it can be driven on or off with a very small amount of power). Its major limitation for low-voltage applications is the high voltage drop it exhibits in the on-state (2 to 4 V). Compared to the MOSFET, the operating frequency of the IGBT is relatively low (few devices are rated over 50 kHz), mainly because of a so-called ‘current-tail’ problem during turn-off. This problem is caused by the slow decay of the conduction current during turn-off, resulting from the slow recombination of the large number of carriers that flood the thick ‘drift’ region of the IGBT during conduction. The net result is that the turn-off switching loss of an IGBT is considerably higher than its turn-on loss. Generally, datasheets quote the turn-off energy as a measured parameter, and one has to multiply that number by the switching frequency of the intended application to estimate the turn-off loss.
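As a hedged example of the datasheet calculation just described, the snippet below multiplies an assumed turn-off energy by an assumed switching frequency; both figures are invented for illustration and do not come from any real IGBT datasheet.

    E_OFF = 5e-3   # turn-off energy per switching event in joules (5 mJ, assumed)
    F_SW = 10e3    # switching frequency of the intended application in hertz (10 kHz, assumed)

    turn_off_loss = E_OFF * F_SW  # average power dissipated at turn-off, in watts
    print(f"Estimated turn-off loss: {turn_off_loss:.0f} W")  # 5 mJ x 10 kHz = 50 W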

At very high power levels, thyristor-based devices (SCR, GTO, MCT) are still the only choice. Driving a thyristor, however, is somewhat complicated, as this device can only be turned on; it turns off by itself as soon as no more current flows through it. This requires a specific circuit with some means of diverting the current, or specific applications where the current is known to cancel regularly (i.e. alternating current). Different solutions have been developed to overcome this limitation (MOS-controlled thyristors, gate turn-off thyristors, etc.). These components are widely used in power distribution applications.

Parameters of power semiconductor devices

Various parameters are as follows:

  1. Breakdown voltage: Often the trade-off is between breakdown voltage rating and on-resistance because increasing the breakdown voltage by incorporating a thicker and lower doped drift region leads to higher on-resistance.
  2. On-resistance: Higher current rating lowers the on-resistance due to greater numbers of parallel cells. This increases overall capacitance and slows down the speed.
  3. Rise and fall times for switching between on and off states.
  4. Safe-operating area (from thermal dissipation and “latch-up” consideration)
  5. Thermal resistance: This is an often-ignored but extremely important parameter from a practical system design point of view. Semiconductors do not perform well at elevated temperatures, yet because of the large currents they conduct, all power semiconductor devices heat up. They therefore need to be cooled by removing that heat continuously. The package provides the path between the semiconductor die and the outside world for channeling the heat away. Generally, large-current devices have a large die and packaging surface area and hence a lower thermal resistance. (A minimal junction-temperature estimate follows this list.)
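To make the thermal-resistance point concrete, here is a minimal sketch that estimates the junction temperature from the dissipated power and a junction-to-ambient thermal resistance; all values are assumptions for illustration, not datasheet figures.

    # Junction temperature estimate: Tj = Ta + P * Rth(junction-to-ambient)
    T_AMBIENT = 40.0      # ambient temperature in degrees Celsius (assumed)
    R_TH_JA = 1.5         # junction-to-ambient thermal resistance in K/W, device plus heat sink (assumed)
    P_DISSIPATED = 60.0   # total conduction and switching losses in watts (assumed)
    T_J_MAX = 150.0       # assumed maximum junction temperature rating

    t_junction = T_AMBIENT + P_DISSIPATED * R_TH_JA
    print(f"Estimated junction temperature: {t_junction:.0f} degC")  # 40 + 60 * 1.5 = 130 degC
    if t_junction > T_J_MAX:
        print("Cooling is insufficient for this operating point.")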

Research and development

Packaging

The role of packaging is to:

  • connect a die to the external circuit;
  • provide a way to remove the heat generated by the device;
  • protect the die from the external environment (moisture, dust).

Many of the reliability issues of power devices are related either to excessive temperature or to fatigue due to thermal cycling. Research is currently carried out on the following topics:

  • improving the cooling performance;
  • improving the resistance to thermal cycling by closely matching the coefficient of thermal expansion of the packaging to that of the silicon;
  • increasing the maximum operating temperature of the packaging material.

Research is also ongoing on electrical issues such as reducing the parasitic inductance of packaging. This inductance limits the operating frequency as it generates losses in the devices during commutation.

Low-voltage MOSFETs are also limited by the parasitic resistance of the packages, as their intrinsic on-state resistance can be as low as one or two milliohms.

Some of the most common types of power semiconductor packages include the TO-220, TO-247, TO-262, TO-3 and D2Pak.

Improvement of structures

IGBTs are still under development, and increased operating voltages can be expected in the future. At the high-power end of the range, MOS-controlled thyristors are promising devices. A major improvement over the conventional MOSFET structure is achieved by applying the superjunction charge-balance principle to the design. Essentially, it allows the thick drift region of a power MOSFET to be heavily doped (thereby reducing the electrical resistance to electron flow) without compromising the breakdown voltage. An adjacent region doped to a similar level, but with the opposite carrier polarity (holes), is created within the structure. These two similar but oppositely doped regions effectively cancel out their mobile charge and develop a ‘depleted region’ which supports the high voltage during the off-state. During the conducting state, on the other hand, the higher doping of the drift region allows easier flow of carriers, thereby reducing the on-resistance. Commercial devices based on this principle have been developed by International Rectifier and Infineon under the name CoolMOS™.

Wide band-gap semiconductors

The major breakthrough in power semiconductor devices is expected from the replacement of silicon by a wide band-gap semiconductor. At the moment, silicon carbide (SiC) is considered to be the most promising. SiC Schottky diodes with a breakdown voltage of 1200 V are commercially available, as are 1200 V JFETs. As both are majority carrier devices, they can operate at high speed. Bipolar devices are being developed for higher voltages, up to 20 kV. Among its advantages, silicon carbide can operate at higher temperature (up to 400°C) and has a lower thermal resistance than silicon, allowing better cooling.

Related devices and topics include:

  • Bipolar junction transistor
  • Bootstrapping
  • FGMOS
  • Power electronics
  • Power MOSFET
  • Dimmer
  • Gate turn-off thyristor
  • IGBT
  • Integrated gate-commutated thyristor
  • Thyristor
  • Triac
  • Voltage regulator

 

# For MORE, E-mail to info@makcissolutions.com . 

Marine electronics

Wednesday, October 21st, 2009

Marine electronics refers to electronic devices designed and classed for use in the marine environment, where even small amounts of salt water can destroy ordinary electronic devices. The majority of these devices are therefore either water resistant or waterproof. A wide variety of marine electronics is available in the marketplace today. Reviews and reports on marine chartplotters, autopilots, VHF radios, networked chartplotters, fish finders, and a wide range of handheld devices can be found at Marine Electronics Reviews.

The term marine electronics is used for areas such as

  • Ship
  • Yacht

Marine electronics devices are

  • Chartplotter
  • Marine VHF radio
  • Autopilot/Self-steering gear
  • Fishfinder/Sonar
  • Radar
  • GPS
  • Electronic compass
  • Satellite television
  • Marine fuel management

Communication

Marine electronics devices communicate using protocols defined by the NMEA (National Marine Electronics Association). Two NMEA standards are available:

  • NMEA 0183
  • NMEA 2000

NMEA 0183 is based on serial communication, while NMEA 2000 is based on Controller Area Network (CAN) technology.
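To give a flavour of the NMEA 0183 format, which consists of ASCII sentences sent over a serial link, the sketch below computes the standard NMEA 0183 checksum (the XOR of every character between the leading ‘$’ and the ‘*’) for a made-up GGA-style sentence; the sentence content is illustrative only.

    def nmea0183_checksum(body: str) -> str:
        """XOR of every character of the sentence body, as two hex digits."""
        value = 0
        for ch in body:
            value ^= ord(ch)
        return f"{value:02X}"

    # Made-up sentence body (GPS fix data), for illustration only.
    body = "GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,"
    print(f"${body}*{nmea0183_checksum(body)}")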

Different suppliers of marine electronics also have their own communication protocols:

  • Simrad has SimNet
  • Raymarine has SeaTalk
  • Furuno has NavNet
  • Stowe has Dataline

Companies

The international companies selling marine electronics for ships and yachts, listed alphabetically, are:

  • Airmar
  • FuelTrax
  • Furuno
  • Maretron
  • Raymarine Marine Electronics
  • Simrad in Kongsberg Maritime
  • Simrad Yachting in Navico
  • Stowe Marine
  • Tinley Electronics

Companies developing marine electronics products are

  • Egersund Marine Electronics
  • Kongsberg Maritime
  • Navico

 

# For MORE, E-mail to info@makcissolution.com .

Introduction of ANIMATION

Wednesday, October 21st, 2009

Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways.

The most common method of presenting animation is as a motion picture or video program, although several other forms of presenting animation also exist.

History of animation

Early examples of attempts to capture the phenomenon of motion drawing can be found in paleolithic cave paintings, where animals are depicted with multiple legs in superimposed positions, clearly attempting to convey the perception of motion.

A 5,200 year old earthen bowl found in Iran at Shahr-i Sokhta has five images of a goat painted along the sides. This has been claimed to be an example of early animation. However, since no equipment existed to show the images in motion, such a series of images cannot be called animation in a true sense of the word.

The phenakistoscope, the praxinoscope and the common flip book were early popular animation devices invented during the 1800s, while a Chinese zoetrope-type device had been invented as early as 180 AD. These devices produced movement from sequential drawings using technological means, but animation did not develop much further until the advent of cinematography.

There is no single person who can be considered the “creator” of the art of film animation, as there were several people doing several projects which could be considered various types of animation all around the same time.

Georges Méliès, a creator of special-effect films, was among the first people to use animation techniques. He discovered by accident that he could stop the camera, change something in the scene, and then continue rolling the film; this idea later became known as stop-motion animation. Méliès stumbled on the technique when his camera broke down while shooting a bus driving by. By the time he had fixed the camera, a hearse happened to be passing just as he restarted rolling the film, and the end result was that the bus appeared to transform into a hearse. Méliès was just one of the great contributors to animation in its early years.

The earliest surviving stop-motion advertising film was an English short by Arthur Melbourne-Cooper called Matches: An Appeal (1899). Developed for the Bryant and May Matchsticks company, it involved stop-motion animation of wired-together matches writing a patriotic call to action on a blackboard.

J. Stuart Blackton was possibly the first American filmmaker to use the techniques of stop-motion and hand-drawn animation. Introduced to filmmaking by Edison, he pioneered these concepts at the turn of the 20th century, with his first copyrighted work dated 1900. Several of his films, among them The Enchanted Drawing (1900) and Humorous Phases of Funny Faces (1906) were film versions of Blackton’s “lightning artist” routine, and utilized modified versions of Méliès’ early stop-motion techniques to make a series of blackboard drawings appear to move and reshape themselves. ‘Humorous Phases of Funny Faces’ is regularly cited as the first true animated film, and Blackton is considered the first true animator.

Another French artist, Émile Cohl, began drawing cartoon strips and created a film in 1908 called Fantasmagorie. The film largely consisted of a stick figure moving about and encountering all manner of morphing objects, such as a wine bottle that transforms into a flower. There were also sections of live action where the animator’s hands would enter the scene. The film was created by drawing each frame on paper and then shooting each frame onto negative film, which gave the picture a blackboard look. This makes Fantasmagorie the first animated film created using what came to be known as traditional (hand-drawn) animation.

Following the successes of Blackton and Cohl, many other artists began experimenting with animation. One such artist was Winsor McCay, a successful newspaper cartoonist, who created detailed animations that required a team of artists and painstaking attention to detail. Each frame was drawn on paper, which invariably required backgrounds and characters to be redrawn and animated for every frame. Among McCay’s most noted films are Little Nemo (1911), Gertie the Dinosaur (1914) and The Sinking of the Lusitania (1918).

The production of animated short films, typically referred to as “cartoons”, became an industry of its own during the 1910s, and cartoon shorts were produced to be shown in movie theaters. The most successful early animation producer was John Randolph Bray, who, along with animator Earl Hurd, patented the cel animation process which dominated the animation industry for the rest of the decade.

Techniques

Traditional animation

Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings, which are first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators’ drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one-by-one onto motion picture film against a painted background by a rostrum camera.

The traditional cel animation process became obsolete by the beginning of the 21st century. Today, animators’ drawings and the backgrounds are either scanned into or drawn directly into a computer system. Various software programs are used to color the drawings and simulate camera movement and effects. The final animated piece is output to one of several delivery media, including traditional 35 mm film and newer media such as digital video. The “look” of traditional cel animation is still preserved, and the character animators’ work has remained essentially the same over the past 70 years. Some animation producers have used the term “tradigital” to describe cel animation which makes extensive use of computer technology.

Examples of traditionally animated feature films include Pinocchio (United States, 1940), Animal Farm (United Kingdom, 1954), and Akira (Japan, 1988). Traditional animated films which were produced with the aid of computer technology include The Lion King (US, 1994), Sen to Chihiro no Kamikakushi (Spirited Away) (Japan, 2001), Treasure Planet (USA, 2002) and Les Triplettes de Belleville (2003).

  • Full animation refers to the process of producing high-quality traditionally animated films, which regularly use detailed drawings and plausible movement. Fully animated films can be done in a variety of styles, from realistically designed works such as those produced by the Walt Disney studio to the more “cartoony” styles of those produced by the Warner Bros. animation studio. Many of the Disney animated features are examples of full animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US, 1999) and Nocturna (Spain, 2007).
  • Limited animation involves the use of less detailed and/or more stylized drawings and methods of movement. Pioneered by the artists at the American studio United Productions of America, limited animation can be used as a method of stylized artistic expression, as in Gerald McBoing Boing (US, 1951), Yellow Submarine (UK, 1968), and much of the anime produced in Japan. Its primary use, however, has been in producing cost-effective animated content for media such as television (the work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web cartoons). Some examples are SpongeBob SquarePants (USA, 1999–present), The Fairly OddParents (USA, 2001–present) and Invader Zim (USA, 2001–2006).
  • Rotoscoping is a technique, patented by Max Fleischer in 1917, in which animators trace live-action movement, frame by frame. The source film can be directly copied from actors’ outlines into animated drawings, as in The Lord of the Rings (US, 1978), used as a basis and inspiration for character animation, as in most Disney films, or used in a stylized and expressive manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Other examples are Ralph Bakshi’s Fire and Ice (USA, 1983) and Heavy Metal (1981).
  • Live-action/animation is a technique combining hand-drawn characters with live-action shots. One of its earlier uses was in the Koko the Clown cartoons, where Koko was drawn over live-action footage. Other examples include Who Framed Roger Rabbit? (USA, 1988), Space Jam (USA, 1996) and Osmosis Jones (USA, 2002).
  • Anime is a style primarily associated with Japan, though it drew early influence from American animation. It usually features detailed character designs but relatively limited movement: mouth movements typically use 2–3 frames, leg movements about 6–10, and so on. The eyes are often very detailed, so instead of redrawing them in every frame, an animator may draw the eyes from 5–6 angles and reuse them across frames (today a computer is often used for this). Some examples of anime films are Spirited Away (Japan, 2001), Akira (Japan, 1988) and Princess Mononoke (Japan, 1997).

Stop motion

Stop-motion animation is used to describe animation created by physically manipulating real-world objects and photographing them one frame of film at a time to create the illusion of movement. There are many different types of stop-motion animation, usually named after the type of media used to create the animation. Computer software is widely available to create this type of animation.

  • Puppet animation typically involves stop-motion puppet figures interacting with each other in a constructed environment, in contrast to the real-world interaction in model animation. The puppets generally have an armature inside of them to keep them still and steady as well as constraining them to move at particular joints. Examples include The Tale of the Fox (France, 1937), The Nightmare Before Christmas (US, 1993), Corpse Bride (US, 2005), Coraline (US, 2009), the films of Jiří Trnka and the TV series Robot Chicken (US, 2005–present).
    • Puppetoon, created using techniques developed by George Pál, are puppet-animated films which typically use a different version of a puppet for different frames, rather than simply manipulating one existing puppet.
  • Clay animation, or Plasticine animation, often abbreviated as claymation, uses figures made of clay or a similar malleable material to create stop-motion animation. The figures may have an armature or wire frame inside of them, similar to the related puppet animation (above), that can be manipulated in order to pose the figures. Alternatively, the figures may be made entirely of clay, such as in the films of Bruce Bickford, where clay creatures morph into a variety of different shapes. Examples of clay-animated works include The Gumby Show (US, 1957–1967), the Morph shorts (UK, 1977–2000), the Wallace and Gromit shorts (UK, as of 1989), Jan Švankmajer’s Dimensions of Dialogue (Czechoslovakia, 1982) and The Trap Door (UK, 1984). Feature films include Wallace and Gromit: Curse of the Were-Rabbit and The Adventures of Mark Twain.
  • Cutout animation is a type of stop-motion animation produced by moving two-dimensional pieces of material such as paper or cloth. Examples include Terry Gilliam’s animated sequences from Monty Python’s Flying Circus (UK, 1969–1974); Fantastic Planet (France/Czechoslovakia, 1973); Tale of Tales (Russia, 1979); and the pilot episode (and occasional later segments) of the TV series South Park (US, 1997).
    • Silhouette animation is a variant of cutout animation in which the characters are backlit and only visible as silhouettes. Examples include The Adventures of Prince Achmed (Weimar Republic, 1926) and Princes et princesses (France, 2000).
  • Model animation refers to stop-motion animation created to interact with and exist as a part of a live-action world. Intercutting, matte effects, and split screens are often employed to blend stop-motion characters or objects with live actors and settings. Examples include the work of Ray Harryhausen, as seen in films such as Jason and the Argonauts (1963), and the work of Willis O’Brien on films such as King Kong (1933).
    • Go motion is a variant of model animation which uses various techniques to create motion blur between frames of film, which is not present in traditional stop-motion. The technique was invented by Industrial Light & Magic and Phil Tippett to create special effects scenes for the film The Empire Strikes Back (1980).
  • Object animation refers to the use of regular inanimate objects in stop-motion animation, as opposed to specially created items. One example of object animation is the brickfilm, which incorporates the use of plastic toy construction blocks such as Lego.
    • Graphic animation uses non-drawn flat visual graphic material (photographs, newspaper clippings, magazines, etc.), which is sometimes manipulated frame by frame to create movement. At other times, the graphics remain stationary, while the stop-motion camera is moved to create on-screen action.
  • Pixilation involves the use of live humans as stop motion characters. This allows for a number of surreal effects, including disappearances and reappearances, allowing people to appear to slide across the ground, and other such effects. Examples of pixilation include The Secret Adventures of Tom Thumb and Angry Kid shorts.

Computer animation

Computer animation encompasses a variety of techniques, the unifying factor being that the animation is created digitally on a computer.

2D animation

2D animation figures are created and/or edited on the computer using 2D bitmap graphics or 2D vector graphics. This includes automated, computerized versions of traditional animation techniques such as tweening, morphing, onion skinning and interpolated rotoscoping (a minimal tweening sketch follows the list below).

  • Analog computer animation
  • Flash animation
  • PowerPoint animation
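As promised above, here is a minimal tweening sketch: it linearly interpolates a character’s 2D position between two keyframes to produce the in-between frames. The coordinates and frame count are invented for the example.

    def tween(start, end, frames):
        """Yield `frames` 2D positions moving linearly from `start` to `end` (inclusive)."""
        (x0, y0), (x1, y1) = start, end
        for i in range(frames):
            t = i / (frames - 1) if frames > 1 else 0.0  # interpolation parameter from 0.0 to 1.0
            yield (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)

    # Keyframe A at (0, 0), keyframe B at (100, 40), five frames in total.
    for frame, (x, y) in enumerate(tween((0, 0), (100, 40), 5)):
        print(f"frame {frame}: x = {x:6.1f}, y = {y:6.1f}")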

3D animation

3D animation uses digital models manipulated by an animator. In order to manipulate a mesh, it is given a digital skeletal structure that can be used to control the mesh; this process is called rigging (a toy example is sketched below). Various other techniques can be applied, such as mathematical functions (e.g. gravity, particle simulations), simulated fur or hair, effects such as fire and water, and motion capture, to name but a few; these techniques fall under the category of 3D dynamics. Many 3D animations are very believable and are commonly used as visual effects for recent movies.

2D animation techniques tend to focus on image manipulation while 3D techniques usually build virtual worlds in which characters and objects move and interact. 3D animation can create images that seem real to the viewer.
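As a toy illustration of rigging, the sketch below binds a few mesh vertices to a single ‘bone’ and rotates them about the Z axis to pose them. Real rigs use weighted skinning across many bones; the vertex positions and joint angle here are assumptions made up for the example.

    import math

    def rotate_about_z(vertex, angle_rad, pivot=(0.0, 0.0, 0.0)):
        """Rotate a 3D vertex about the Z axis around a pivot point (the bone's joint)."""
        x, y, z = (v - p for v, p in zip(vertex, pivot))
        cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
        return (x * cos_a - y * sin_a + pivot[0],
                x * sin_a + y * cos_a + pivot[1],
                z + pivot[2])

    mesh = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.5), (3.0, 0.0, 0.0)]  # a simple "arm" of three vertices
    posed = [rotate_about_z(v, math.radians(30)) for v in mesh]  # bend the arm 30 degrees at the origin

    for before, after in zip(mesh, posed):
        print(before, "->", tuple(round(c, 2) for c in after))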

Other animation techniques

  • Drawn on film animation: a technique where footage is produced by creating the images directly on film stock, for example by Norman McLaren, Len Lye and Stan Brakhage.
  • Paint-on-glass animation: a technique for making animated films by manipulating slow drying oil paints on sheets of glass.
  • Pinscreen animation: makes use of a screen filled with movable pins, which can be moved in or out by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The technique has been used to create animated films with a range of textural effects difficult to achieve with traditional cel animation.
  • Sand animation: sand is moved around on a backlighted or frontlighted piece of glass to create each frame for an animated film. This creates an interesting effect when animated because of the light contrast.
  • Flip book: A flip book (sometimes, especially in British English, flick book) is a book with a series of pictures that vary gradually from one page to the next, so that when the pages are turned rapidly, the pictures appear to animate by simulating motion or some other change. Flip books are often illustrated books for children, but may also be geared towards adults and employ a series of photographs rather than drawings. Flip books are not always separate books, but may appear as an added feature in ordinary books or magazines, often in the page corners. Software packages and websites are also available that convert digital video files into custom-made flip books.

The 1906 cartoon Humorous Phases of Funny Faces by J. Stuart Blackton is regarded as the first animated film.

 

# For MORE, E-mail to info@makcissolutions.com .

History of ANIMATION

Wednesday, October 21st, 2009

Animation is an art form which, in its modern incarnation, developed alongside motion pictures. Earlier attempts at making drawings move were only experimental.

The past

Cave paintings

The earliest examples derive from still drawings, which can be found in Palaeolithic cave paintings, where animals are depicted with multiple sets of legs in superimposed positions, clearly attempting to convey the perception of motion.

Pottery of Persia

A 5,200-year old earthen bowl found in Iran in Shahr-i Sokhta has five images painted along the sides. It shows phases of a goat leaping up to a tree to take a pear. However, since no equipment existed to show the images in motion, such a series of images cannot be called animation in a true sense of the word. Similar forms of sequential images can also be found in medieval Persian Islamic pottery.

Egyptian murals

An Egyptian mural, approximately 4000 years old, shows wrestlers in action. Even though this may appear similar to a series of animation drawings, there was no way of viewing the images in motion. It does, however, indicate the artist’s intention of depicting motion.

Zoetrope

A zoetrope is a device which creates the image of a moving picture. The earliest elementary zoetrope was created in China around 180 AD by the prolific inventor Ting Huan. Driven by convection, Ting Huan’s device hung over a lamp. The rising air turned vanes at the top, from which were hung translucent paper or mica panels. Pictures painted on the panels would appear to move if the device was spun at the right speed.

The modern zoetrope contraption was produced in 1834 by William George Horner. The device is basically a cylinder with vertical slits around the sides. Around the inside edge of the cylinder there is a series of pictures on the side opposite the slits. As the cylinder is spun, the user looks through the slits, producing the illusion of motion. No one thought this small device would be the initial beginning of the animation world to come. As a matter of fact, in present-day introductory animation classes, the zoetrope is still used to illustrate early concepts of animation.

Leonardo shoulder study (ca. 1510)

Seven drawings by Leonardo da Vinci extending over two folios in the Windsor Collection, Anatomical Studies of the Muscles of the Neck, Shoulder, Chest, and Arm, show detailed drawings of the upper body (with a less-detailed facial image), illustrating the changes as the torso turns from profile to frontal position and the forearm extends.

The magic lantern

The magic lantern is the predecessor of the modern day projector. It consisted of a translucent oil painting and a simple lamp. When put together in a darkened room, the image would appear larger on a flat surface. Athanasius Kircher spoke about this originating from China in the 16th century. Some slides for the lanterns contained parts that could be mechanically actuated to present limited movement on the screen.

Thaumatrope (1824)

A thaumatrope was a simple toy used in the Victorian era. It was a small circular disk or card with a different picture on each side, attached to a piece of string running through the centre. When the string was twirled quickly between the fingers, the two pictures appeared to combine into a single image. The creator of this invention may have been either John Ayrton Paris or Charles Babbage.

Phenakistoscope (1831)

The phenakistoscope was an early animation device, the predecessor of the zoetrope. It was invented in 1831 simultaneously by the Belgian Joseph Plateau and the Austrian Simon von Stampfer.

Praxinoscope (1877)

The praxinoscope, invented by French scientist Charles-Émile Reynaud, was a more sophisticated version of the zoetrope. It used the same basic mechanism of a strip of images placed on the inside of a spinning cylinder, but instead of viewing it through slits, it was viewed in a series of small, stationary mirrors around the inside of the cylinder, so that the animation would stay in place, and provide a clearer image and better quality. Reynaud also developed a larger version of the praxinoscope that could be projected onto a screen, called the Théâtre Optique.

Flip book (1868)

The first flip book was patented in 1868 by John Barnes Linnett. Flip books were yet another development that brought us closer to modern animation. Like the zoetrope, the flip book creates the illusion of motion: a set of sequential pictures flipped at high speed produces this effect. The Mutoscope (1894) is basically a flip book in a box with a crank handle to flip the pages.

The present

Traditional animation

The first animated film was created by Charles-Émile Reynaud, inventor of the praxinoscope, an animation system using loops of 12 pictures. On October 28, 1892 at Musée Grévin in Paris, France he exhibited animations consisting of loops of about 500 frames, using his Théâtre Optique system – similar in principle to a modern film projector.

The first animated work on standard picture film was Humorous Phases of Funny Faces (1906) by J. Stuart Blackton. It features a cartoonist drawing faces on a chalkboard, and the faces apparently coming to life.

The first puppet-animated film was The Beautiful Lukanida (1912) by the Russian-born (ethnically Polish) director Wladyslaw Starewicz (Ladislas Starevich).

The first animated feature film was El Apóstol, made in 1917 by Quirino Cristiani from Argentina. He also directed two other animated feature films, including 1931’s Peludopolis, the first to use synchronized sound. None of these, however, survive to the present day. The earliest surviving animated feature, which used colour-tinted scenes, is the silhouette-animated The Adventures of Prince Achmed (1926), directed by the German Lotte Reiniger and the French/Hungarian Berthold Bartosch. Walt Disney’s Snow White and the Seven Dwarfs (1937) is often considered to be the first animated feature, when in fact at least eight were previously released. However, Snow White was the first to become successful and well known within the English-speaking world.

The first animation to use the full, three-color Technicolor process was Flowers and Trees (1932), made by Disney Studios, which won an Academy Award for this work.

 Stop motion

Stop motion is used for many animation productions using physical objects rather than images of people, as with traditional animation. An object will be photographed, moved slightly, and then photographed again. When the pictures are played back in normal speed the object will appear to move by itself. This process is used for many productions, for example, clay animations such as Chicken Run and Wallace and Gromit, as well as animated movies which use poseable figures, such as The Nightmare Before Christmas and James and the Giant Peach.

Stop motion animation was also commonly used for special effects work in many live-action films, such as the 1933 version of King Kong and The 7th Voyage of Sinbad.

CGI animation

Computer-generated imagery (CGI) revolutionized animation. The first film done completely in CGI was Toy Story, produced by Pixar. The process of CGI animation is still very tedious and similar in that sense to traditional animation, and it still adheres to many of the same principles.

A principal difference between CGI animation and traditional animation is that drawing is replaced by 3D modeling, almost like a virtual version of stop motion. A form of animation that combines the two worlds is computer-assisted 2D animation, in which the drawing is done on the computer (which remains close to traditional drawing and is sometimes based on it).

The future

Animated humans

Most CGI-created films are based on animal characters, monsters, machines or cartoon-like humans. Animation studios are now trying to develop ways of creating realistic-looking humans. Films that have attempted this include Final Fantasy: The Spirits Within in 2001, Final Fantasy VII: Advent Children in 2005, The Polar Express in 2004, and Beowulf in 2007. However, due to the complexity of human body functions, emotions and interactions, this method of animation is rarely used. The more realistic a CG character becomes, the more difficult it is to create the nuances and details of a living person. The creation of hair and clothing that move convincingly with the animated human character is another area of difficulty.

Cel-shaded animation

A type of non-photorealistic rendering designed to make computer graphics appear to be hand-drawn. Cel-shading is often used to mimic the style of a comic book or cartoon. It is a somewhat recent addition to computer graphics, most commonly turning up in console video games. Though the end result of cel-shading has a very simplistic feel like that of hand-drawn animation, the process is complex. The name comes from the clear sheets of acetate, called cels, that are painted on for use in traditional 2D animation. It may be considered a “2.5D” form of animation. True real-time cel-shading was first introduced in 2000 by Sega’s Jet Set Radio for their Dreamcast console. Besides video games, a number of anime have also used this style of animation, such as Freedom Project in 2006.

 

# For MORE, E-mail to info@makcissolutions.com .