Subtitles

The synchronized display of the text of a translation or transcription of the dialogue at the bottom of the screen, during scenes in which sound is available but not understood.

The term subtitles refers to the synchronized display of the text of a translation or transcription of the dialogue at the bottom of the screen during scenes in which sound is available but not understood. The term can also refer to printed statements or fragments of dialogue that appear on screen between the scenes of a silent film. Subtitles can be either a written translation of dialogue in a foreign language or a written rendering of dialogue in the same language, with or without added information that helps viewers who are deaf or hard of hearing, who cannot understand the spoken dialogue, or who have difficulty recognizing accents follow the dialogue.

The encoding method can either be pre-rendered with the video or kept separate, as either graphics or text to be rendered and overlaid by the receiver. Separate subtitles are used for DVD, Blu-ray, and television Teletext/Digital Video Broadcasting (DVB) subtitling, as well as for EIA-608 captioning. These are hidden unless requested by the viewer from a menu or remote-control key, or by selecting the relevant page or service (e.g., p. 888 or CC1), and they always carry additional sound representations for deaf and hard-of-hearing viewers. The Teletext subtitle language follows the original audio, except in multilingual countries, where the broadcaster may provide subtitles in additional languages on other Teletext pages. EIA-608 captions are similar, except that North American Spanish-language stations may provide captioning in Spanish on CC3. DVD, Blu-ray, and some HD DVB broadcasts differ only in using run-length-encoded graphics instead of text.

Sometimes, mainly at film festivals, subtitles may be shown on a separate display below the screen, thus saving the filmmaker from creating a subtitled copy for perhaps just one show. Television subtitling for the deaf and hard of hearing is also referred to as closed captioning in some countries.

Less common uses include operas, such as Verdi's Aida, where lyrics sung in Italian are subtitled in English or another local language on luminous screens outside the stage area, or on screens attached to the backs of the seats in front of the audience, so that the audience can follow the storyline.

The word subtitle is the prefix sub- ("below") followed by title. In some cases, such as live opera, the dialogue is displayed above the stage in what are referred to as surtitles (sur- meaning "above").

Today, professional subtitlers usually work with specialized computer software and hardware where the video is digitally stored on a hard disk, making each individual frame instantly accessible. Besides creating the subtitles, the subtitler usually also tells the computer software the exact positions where each subtitle should appear and disappear. For cinema film, this task is traditionally done by separate technicians. The end result is a subtitle file containing the actual subtitles as well as position markers indicating where each subtitle should appear and disappear. These markers are usually based on timecode if it is a work for electronic media (e.g., TV, video, DVD), or on film length (measured in feet and frames) if the subtitles are to be used for traditional cinema film.
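
As a rough illustration of these position markers, here is a minimal sketch, in Python, of the conversion between a timecode and a film length measured in feet and frames; it assumes 35 mm film (16 frames per foot) running at the standard sound-film rate of 24 frames per second, and the sample values are invented.

    # A minimal sketch of cinema-style position markers, assuming 35 mm
    # film (16 frames per foot) at the sound-film rate of 24 frames/second.

    FRAMES_PER_FOOT = 16  # frames in one foot of 35 mm film
    FPS = 24              # sound-film frame rate

    def timecode_to_feet_frames(hours, minutes, seconds, frames):
        """Convert an HH:MM:SS:FF timecode to film length in feet and frames."""
        total_frames = ((hours * 60 + minutes) * 60 + seconds) * FPS + frames
        return divmod(total_frames, FRAMES_PER_FOOT)  # (feet, frames)

    # A subtitle cue at timecode 00:01:30:12 lands at 135 feet, 12 frames.
    print(timecode_to_feet_frames(0, 1, 30, 12))  # (135, 12)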

The finished subtitle file is used to add the subtitles to the picture, either:

  • directly into the picture (open subtitles);
  • embedded in the vertical interval and later superimposed on the picture by the end-user with the help of an external decoder or a decoder built into the TV (closed subtitles on TV or video);
  • or converted (rendered) to TIFF or BMP graphics that are later superimposed on the picture by the end user's equipment (closed subtitles on DVD or as part of a DVB broadcast).

Individuals can also create subtitles using freely available subtitle-creation software such as Subtitle Workshop for Windows, MovieCaptioner for Mac/Windows, and Subtitle Composer for Linux, and then hardcode them onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.
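
Command-line tools can hardcode subtitles in the same way. The sketch below uses ffmpeg (not mentioned above, but a common free alternative) and assumes an ffmpeg build with subtitle-rendering (libass) support is installed; the file names are hypothetical.

    # A minimal sketch of hardcoding ("burning in") subtitles, assuming an
    # ffmpeg build with subtitle rendering (libass); file names are invented.
    import subprocess

    def burn_in(video_in: str, srt_file: str, video_out: str) -> None:
        """Re-encode the video with the SRT subtitles drawn into the picture."""
        subprocess.run(
            ["ffmpeg", "-i", video_in,
             "-vf", f"subtitles={srt_file}",  # render cues as open subtitles
             video_out],
            check=True,
        )

    burn_in("movie.mp4", "movie.srt", "movie_subtitled.mp4")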


Closed captioning and subtitling are both processes of displaying text to provide additional or interpretive information. Both are typically used as a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements. They have also been used to provide a textual translation of a presentation's primary audio language into an alternative language, usually burned in (or "open") to the video and unselectable. HTML5 defines subtitles as a "transcription or translation of the dialogue ... when sound is available but not understood" by the viewer (for example, dialogue in a foreign language) and captions as a "transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information ... when sound is unavailable or not clearly audible" (for example, when audio is muted or the viewer is deaf or hard of hearing).
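
The HTML5 distinction is visible in the markup itself: the same video can offer both track kinds and let the viewer choose. The sketch below generates such a snippet; the file names are hypothetical.

    # A minimal sketch of HTML5's subtitles-vs-captions distinction: one
    # <track> element per kind on the same video; file names are invented.
    html = """<video controls src="movie.mp4">
      <track kind="subtitles" src="movie.en.vtt" srclang="en" label="English">
      <track kind="captions" src="movie.en-cc.vtt" srclang="en" label="English CC">
    </video>"""

    with open("player.html", "w", encoding="utf-8") as f:
        f.write(html)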


Automatic captioning

Some programs and online software allow automatic captions, mainly using speech-to-text features.

For example, on YouTube, automatic captions are available in English, Dutch, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish. If automatic captions are available for the language, they are automatically published on the video and can be reviewed through the YouTube Video Manager in the Creator Studio.
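
As a rough sketch of how speech-to-text output becomes captions, the code below numbers recognized phrases and formats them as SubRip (SRT) cues. The segment list stands in for the output of a real speech-recognition engine and is invented for the example.

    # A rough sketch of turning speech-to-text output into SRT captions;
    # the segments below stand in for a real recognition engine's output.

    def to_srt_time(seconds: float) -> str:
        """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
        ms = round(seconds * 1000)
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    def build_srt(segments) -> str:
        """Number each (start, end, text) segment and emit it as an SRT cue."""
        return "\n".join(
            f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n"
            for i, (start, end, text) in enumerate(segments, 1)
        )

    segments = [(0.0, 2.5, "Hello and welcome."),
                (2.5, 5.0, "Today we talk about captions.")]
    print(build_srt(segments))
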
Same-language captions

Same-language captions, i.e., without translation, were primarily intended as an aid for people who are deaf or hard of hearing. Internationally, several major studies have demonstrated that same-language captioning can have a major impact on literacy and reading growth across a broad range of reading abilities. This method of subtitling is used by national television broadcasters in China and in India, such as Doordarshan. The idea was conceived by Brij Kothari, who believed that SLS makes reading practice an incidental, automatic, and subconscious part of popular TV entertainment, at a low per-person cost, to shore up literacy rates in India.

Same-language subtitling

Same-language subtitling (SLS) is the use of synchronized captioning of musical lyrics (or any text with an audio/video source) as a repeated reading activity. The basic reading activity involves students viewing a short subtitled presentation projected onscreen while completing a response worksheet. To be really effective, the subtitling should have high-quality synchronization of audio and text; better yet, the subtitles should change color in syllabic synchronization with the audio model, and the text should be at a level that challenges students' language abilities.
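
One way to realize that kind of syllable-synchronized color change is the karaoke timing tag of the Advanced SubStation Alpha (ASS) subtitle format, where {\k<centiseconds>} makes the following syllable change color on cue. The sketch below builds one such line; the syllables and durations are invented for the example.

    # A minimal sketch of syllable-synchronized highlighting using the ASS
    # karaoke tag {\k<centiseconds>}; syllables and durations are invented.

    def karaoke_line(syllables):
        """Build ASS dialogue text from (syllable, duration_cs) pairs."""
        return "".join(f"{{\\k{cs}}}{syl}" for syl, cs in syllables)

    line = karaoke_line([("Twin", 25), ("kle ", 25), ("twin", 25), ("kle ", 25),
                         ("lit", 20), ("tle ", 20), ("star", 60)])
    print(line)  # {\k25}Twin{\k25}kle {\k25}twin{\k25}kle {\k20}lit...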

Closed captions

The "CC in a TV" symbol Jack Foley created, while senior graphic designer at Boston public broadcaster WGBH that invented captioning for television, is public domain so that anyone who captions TV programs can use it.

Closed captioning is the American term for closed subtitles specifically intended for people who are deaf or hard of hearing. These are a transcription rather than a translation, and they usually also contain descriptions of important non-dialogue audio, such as "(sighs)", "(wind blowing)", "("SONG TITLE" playing)", "(kisses)", or "(door creaks)", as well as lyrics. From the expression "closed captions", the word "caption" has in recent years come to mean a subtitle intended for the deaf or hard of hearing, whether "open" or "closed". In British English, "subtitles" usually refers to subtitles for the deaf or hard of hearing (SDH); however, the term "SDH" is sometimes used when there is a need to make a distinction between the two.

Real-time

Programs such as news bulletins, current affairs programs, sport, some talk shows, and political and special events utilize real-time or online captioning. Live captioning is increasingly common, especially in the United Kingdom and the United States, as a result of regulations stipulating that virtually all TV must eventually be accessible to people who are deaf and hard of hearing. In practice, however, these "real-time" subtitles typically lag the audio by several seconds due to the inherent delay in transcribing, encoding, and transmitting them. Real-time subtitles are also prone to typographic errors and mishearing of the spoken words, with no time available to correct them before transmission.

Pre-prepared

Some programs may be prepared in their entirety several hours before broadcast, but with insufficient time to prepare a timecoded caption file for automatic play-out. Pre-prepared captions look similar to offline captions, although the accuracy of cueing may be compromised slightly as the captions are not locked to program timecode.

Newsroom captioning involves the automatic transfer of text from the newsroom computer system to a device that outputs it as captions. It does work, but its suitability as an exclusive system would only apply to programs that had been scripted in their entirety on the newsroom computer system, such as short interstitial updates.

In the United States and Canada, some broadcasters have used it exclusively and simply left uncaptioned those sections of the bulletin for which a script was unavailable. Newsroom captioning limits captions to pre-scripted material and therefore does not cover all of a typical local news broadcast: the weather and sports segments are typically not pre-scripted, and neither are last-second breaking news, changes to the scripts, the broadcasters' ad-lib conversations, or emergency and other live remote reports from the field. By failing to cover items such as these, newsroom-style captioning (or use of the teleprompter for captioning) typically covers less than 30% of a local news broadcast.

Live

Communication Access Real-Time Translation (CART) stenographers, who use a computer with either stenotype or Velotype keyboards to transcribe stenographic input for presentation as captions within two to three seconds of the corresponding audio, must caption anything that is purely live and unscripted. More recent developments, however, include operators using speech recognition software and revoicing the dialogue. Speech recognition technology has advanced so quickly in the United States that, as of 2005, about 50% of all live captioning was done through speech recognition. Real-time captions look different from offline captions, as they are presented as a continuous flow of text as people speak.

Real-time stenographers are the most highly skilled in their profession. Stenography is a system of rendering words phonetically, and English, with its multitude of homophones (e.g., there, their, they're), is particularly unsuited to easy transcription. Stenographers working in courts and inquiries usually have 24 hours in which to deliver their transcripts; consequently, they may enter the same phonetic stenographic codes for a variety of homophones and fix the spelling later. Real-time stenographers must deliver their transcriptions accurately and immediately. They must therefore develop techniques for keying homophones differently, and be unswayed by the pressures of delivering an accurate product on immediate demand.

Submissions to recent captioning-related inquiries have revealed concerns from broadcasters about captioning sports. In the absence of much sports captioning, the Australian Caption Centre submitted to the National Working Party on Captioning (NWPC), in November 1998, three examples of sport captioning, each performed on tennis, rugby league, and swimming programs:

  • Heavily reduced: Captioners ignore commentary and provide only scores and essential information such as “try” or “out”.
  • Significantly reduced: Captioners use QWERTY input to type summary captions yielding the essence of what the commentators are saying, delayed due to the limitations of QWERTY input.
  • Comprehensive real-time: Captioners use stenography to caption the commentary in its entirety.

The NWPC concluded that the standard they accept is the comprehensive real-time method, which gives them access to the commentary in its entirety. Also, not all sports are live. Many events are pre-recorded hours before they are broadcast, allowing a captioner to caption them using offline methods.

Hybrid

Because different programs are produced under different conditions, captioning methodology must consequently be determined on a case-by-case basis. Some bulletins may have a high incidence of truly live material, or the captioning facility may be given insufficient access to video feeds and scripts, making stenography unavoidable. Other bulletins may be pre-recorded just before going to air, making pre-prepared text preferable.

In Australia and the United Kingdom, hybrid methodologies have proven to be the best way to provide comprehensive, accurate, and cost-effective captions on news and current affairs programs. News captioning applications currently available are designed to accept text from a variety of inputs: stenography, Velotype, QWERTY, ASCII import, and the newsroom computer. This allows one facility to handle a variety of online captioning requirements and to ensure that captioners properly caption all programs.

Current affairs programs usually require stenographic assistance. Even though the segments which comprise a current affairs program may be produced in advance, they are usually done so just before on-air time and their duration makes QWERTY input of text unfeasible.

News bulletins, on the other hand, can often be captioned without stenographic input (unless there are live crosses or ad-libbing by the presenters). This is because:

  • Most items are scripted on the newsroom computer system and this text can be electronically imported into the captioning system.
  • Individual news stories are of short duration, so even if they are made available only just prior to broadcast, there is still time to QWERTY in text.

Offline

For non-live, or pre-recorded, programs, television program providers can choose offline captioning. Offline captioning is geared toward the high-end television industry, providing highly customized features such as pop-on style captions, specialized screen placement, speaker identification, italics, special characters, and sound effects.

Offline captioning involves a five-step design and editing process and does much more than simply display the text of a program. Offline captioning helps the viewer follow a storyline, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Offline captioning is the preferred presentation style for entertainment-type programming.

Subtitles for the deaf or hard-of-hearing (SDH)

Subtitles for the deaf or hard-of-hearing (SDH) is an American term introduced by the DVD industry. It refers to regular subtitles in the original language where important non-dialog information has been added, as well as speaker identification, which may be useful when the viewer cannot otherwise visually tell who is saying what.

The only significant difference for the user between SDH subtitles and closed captions is their appearance: SDH subtitles are usually displayed with the same proportional font used for the translation subtitles on the DVD, whereas closed captions are displayed as white text on a black band, which blocks a large portion of the view. Closed captioning is falling out of favor, as many users have no difficulty reading SDH subtitles, which are text with contrast outlines. In addition, DVD subtitles can specify many colors on the same character: primary, outline, shadow, and background. This allows subtitlers to display subtitles on a usually translucent band for easier reading; however, this is rare, since most subtitles use an outline and shadow instead, in order to block a smaller portion of the picture. Closed captions may still supersede DVD subtitles, since many SDH subtitles present all of the text centered, while closed captions usually specify position on the screen: centered, left-aligned, right-aligned, top, etc. This is helpful for speaker identification and overlapping conversation. Some SDH subtitles (such as the subtitles of newer Universal Studios DVDs and Blu-ray Discs) do have positioning, but it is not as common.
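
Positioning on DVDs and in EIA-608 captions is set in the authoring and encoding tools, but the same idea is easy to see in the modern WebVTT text format, which expresses placement as per-cue settings. The sketch below writes such a file; the speakers and timings are invented.

    # A minimal sketch of positioned cues in the WebVTT format, whose cue
    # settings (position, align) serve the speaker-placement role described
    # above; the speakers and timings are invented.
    cues = """WEBVTT

    1
    00:00:01.000 --> 00:00:03.000 position:20% align:left
    ANNA: Did you hear that?

    2
    00:00:02.000 --> 00:00:04.000 position:80% align:right
    BEN: (whispering) Keep your voice down.
    """

    with open("positioned.vtt", "w", encoding="utf-8") as f:
        f.write(cues)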

DVDs for the U.S. market now sometimes have three forms of English subtitles: SDH subtitles; English subtitles, helpful for viewers who may not be hearing-impaired but whose first language may not be English (although they are usually an exact transcript and not simplified); and closed caption data that is decoded by the end-user's closed caption decoder. Most anime releases in the U.S. only include subtitle translations of the original material; therefore, SDH subtitles of English dubs ("dubtitles") are uncommon.

High-definition disc media (HD DVD, Blu-ray Disc) uses SDH subtitles as the sole method because technical specifications do not require HD to support line 21 closed captions. Some Blu-ray Discs, however, are said to carry a closed caption stream that only displays through standard-definition connections. Many HDTVs allow the end-user to customize the captions, including the ability to remove the black band.

Song lyrics are not always captioned, as additional copyright permissions may be required to reproduce the lyrics on-screen as part of the subtitle track. In October 2015, major studios and Netflix were sued over this practice, citing claims of false advertising (as the work is thus not completely subtitled) and civil rights violations (under California's Unruh Civil Rights Act, guaranteeing equal rights for people with disabilities). Judge Stephen Victor Wilson dismissed the suit in September 2016, ruling that allegations of civil rights violations did not present evidence of intentional discrimination against viewers with disabilities, and that allegations over misrepresenting the extent of subtitles "fall far short of demonstrating that reasonable consumers would actually be deceived as to the amount of subtitled content provided, as there are no representations whatsoever that all song lyrics would be captioned, or even that the content would be 'fully' captioned."

Although same-language subtitles and captions are produced primarily with the deaf and hard of hearing in mind, they may also be used to ensure understanding of dialogue that is spoken quietly or mixed in with sound effects, delivered in accents unfamiliar to the intended audience, or spoken by supporting characters in the background or off screen. Jason Kehe of Wired noted that habitual use of subtitles by the non-deaf has been a growing trend for these reasons, and to help pick up on additional details and information found within the dialogue. He drew comparisons to the ubiquity of search engines, stating that "just like Google, closed captions are there, eminently accessible, ready to clarify the unclarities, and so, desperately, we, the paranoids and obsessive-compulsives and postmodern completists, click." Studies (including those by the University of Nottingham and the What Works Clearinghouse of the United States Department of Education) have found that the use of subtitles can help promote reading comprehension in school-aged children.

Asia

In some Asian television programming, captioning is considered a part of the genre, and has evolved beyond simply capturing what is being said. The captions are used artistically; it is common to see the words appear one by one as they are spoken, in a multitude of fonts, colors and sizes that capture the spirit of what is being said. Languages like Japanese also have a rich vocabulary of onomatopoeia which is used in captioning.

East Asia

In some East Asian countries, especially Chinese-speaking ones, subtitling is common in all taped television programs. In these countries, the written text remains mostly uniform while regional dialects in the spoken form can be mutually unintelligible. Therefore, subtitling offers a distinct advantage to aid comprehension. With subtitles, programs in Putonghua, the standard Mandarin, or any dialect can be understood by viewers unfamiliar with it.

On-screen subtitles as seen in Japanese variety television shows are more for decorative purposes, something not seen in television in Europe and the Americas. Some shows even place sound effects over those subtitles. This practice of subtitling has spread to neighboring countries, including South Korea and Taiwan. ATV in Hong Kong once practiced this style of decorative subtitles on its variety shows while it was owned by Want Want Holdings of Taiwan (which also owns CTV and CTI).

South Asia

In India, Same Language Subtitling (SLS) is common for films and music videos. SLS refers to the idea of subtitling in the same language as the audio, highlighted karaoke-style, that is, in time with the speech. The idea of SLS was initiated to shore up literacy rates, as SLS makes reading practice an incidental, automatic, and subconscious part of popular TV entertainment. The idea was well received by the Government of India, which now uses SLS on several national channels, including Doordarshan.

Translation

Translation is the conversion of one language into another in written or spoken form. The process requires a translator, whether human or machine (e.g., Google Translate, Microsoft Translator). Subtitles can be used to translate dialogue from a foreign language into the native language of the audience. This is not only the quickest and cheapest method of translating content; it is also usually preferred, since the audience can hear the original dialogue and the actors' voices.

Subtitle translation can differ from the translation of written text. Usually, during the process of creating subtitles for a film or television program, the subtitle translator analyzes the picture and each sentence of the audio; the translator may or may not have access to a written transcript of the dialogue. Especially in the field of commercial subtitles, the translator often interprets what is meant rather than translating the manner in which the dialogue is stated; that is, the meaning is more important than the form. The audience does not always appreciate this, and it can be frustrating for those who are familiar with some of the spoken language, since spoken language may contain verbal padding or culturally implied meanings that cannot be conveyed in the written subtitles. The subtitle translator may also condense the dialogue to achieve an acceptable reading speed, whereby purpose is more important than form.

Especially in fansubs, the subtitle translator may translate both form and meaning. The subtitle translator may also choose to display a note in the subtitles, usually in parentheses (“(” and “)”), or as a separate block of on-screen text—this allows the subtitle translator to preserve form and achieve an acceptable reading speed; that is, the subtitle translator may leave a note on the screen, even after the character has finished speaking, to both preserve form and facilitate understanding. For example, the Japanese language has multiple first-person pronouns (see Japanese pronouns) and each pronoun is associated with a different degree of politeness. In order to compensate during the English translation process, the subtitle translator may reformulate the sentence, add appropriate words, and/or use notes.

Subtitling

Real-time

Real-time translation subtitling usually involves an interpreter and a stenographer working concurrently, whereby the former quickly translates the dialogue while the latter types; this form of subtitling is rare. The unavoidable delay, typing errors, lack of editing, and high cost mean that real-time translation subtitling is in low demand. Allowing the interpreter to speak directly to the viewers is usually both cheaper and quicker; however, the translation is then not accessible to people who are deaf and hard of hearing.

Offline

Some subtitlers purposely provide edited subtitles or captions to match the needs of their audience: learners of the spoken dialogue as a second or foreign language, visual learners, beginning readers who are deaf or hard of hearing, and people with learning or mental disabilities. For example, for many of its films and television programs, PBS displays standard captions representing the speech of the program audio, word for word, if the viewer selects "CC1" using the television remote control or on-screen menu; however, it also provides edited captions presenting simplified sentences at a slower rate if the viewer selects "CC2". Programs with a diverse audience also often have captions in another language. This is common with popular Latin American soap operas in Spanish. Since CC1 and CC2 share bandwidth, the U.S. Federal Communications Commission (FCC) recommends that translation subtitles be placed in CC3. CC4, which shares bandwidth with CC3, is also available, but programs seldom use it.

Subtitles vs. dubbing and lectoring

The two alternative methods of 'translating' films in a foreign language are dubbing, in which other actors record over the voices of the original actors in a different language, and lectoring, a form of voice-over for fictional material where a narrator tells the audience what the actors are saying while their voices can be heard in the background. Lectoring is common for television in Russia, Poland, and a few other East European countries, while cinemas in these countries commonly show films dubbed or subtitled.

The preference for dubbing or subtitling in various countries is largely based on decisions taken in the late 1920s and early 1930s. With the arrival of sound film, the film importers in Germany, Italy, France, and Spain decided to dub the foreign voices, while the rest of Europe elected to display the dialog as translated subtitles. The choice was largely due to financial reasons (subtitling is more economical and quicker than dubbing), but during the 1930s it also became a political preference in Germany, Italy, and Spain; an expedient form of censorship that ensured that foreign views and ideas could be stopped from reaching the local audience, as dubbing makes it possible to create a dialogue which is totally different from the original. In larger German cities a few "special cinemas" use subtitling instead of dubbing.

Dubbing is still the norm and favored form in these four countries, but the proportion of subtitling is slowly growing, mainly to save cost and turnaround time, but also due to a growing acceptance among younger generations, who are better readers and increasingly have a basic knowledge of English (the dominant language in film and TV) and thus prefer to hear the original dialogue.

Nevertheless, in Spain, for example, only public TV channels show subtitled foreign films, usually late at night. It is extremely rare for any Spanish TV channel to show subtitled versions of TV programs, series, or documentaries. With the advent of digital terrestrial broadcast TV, it has become common practice in Spain to provide optional audio and subtitle streams that allow watching dubbed programs with the original audio and subtitles. In addition, only a small proportion of cinemas show subtitled films. Films with dialogue in Galician, Catalan, or Basque are always dubbed, not subtitled, when shown in the rest of the country. Some non-Spanish-speaking TV stations subtitle interviews in Spanish; others do not.

In many Latin American countries, local network television will show dubbed versions of English-language programs and movies, while cable stations (often international) more commonly broadcast subtitled material. Preference for subtitles or dubbing varies according to individual taste and reading ability, and theaters may order two prints of the most popular films, allowing moviegoers to choose between dubbing or subtitles. Animation and children's programming, however, is nearly universally dubbed, as in other regions.

Since the introduction of the DVD and, later, the Blu-ray Disc, some high-budget films include the simultaneous option of both subtitles and dubbing. Often in such cases, the translations are made separately, rather than the subtitles being a verbatim transcript of the dubbed scenes of the film. While this allows for the smoothest possible flow of the subtitles, it can be frustrating for someone attempting to learn a foreign language.

In the traditional subtitling countries, dubbing is generally regarded as something strange and unnatural and is only used for animated films and TV programs intended for pre-school children. As animated films are "dubbed" even in their original language, and ambient noise and effects are usually recorded on a separate soundtrack, dubbing a low-quality production into a second language produces little or no noticeable effect on the viewing experience. In dubbed live-action television or film, however, viewers are often distracted by the fact that the audio does not match the actors' lip movements. Furthermore, the dubbed voices may seem detached, inappropriate for the character, or overly expressive, and some ambient sounds may not be transferred to the dubbed track, creating a less enjoyable viewing experience.

Subtitles
also known as
  • Caption
resources
  • How to use SRT files for displaying subtitles during video playback on microsoft.com
  • Everything there is to know about subtitling & how it is done on matinee.co.uk
  • How to create subtitles – a Step-by-Step Guide on amberscript.com
  • Top 14 Subtitle Download Websites on movavi.com
source
Adapted from content published on wikipedia.org