Early video game technology and computer chip music
When video games emerged as a popular form of entertainment in the late 1970s, music was stored on physical media as analog waveforms, on formats such as compact cassettes and phonograph records. Such components were expensive and prone to breakage under heavy use, making them less than ideal for use in an arcade cabinet, though in rare cases they were used (Journey). A more affordable method of including music in a video game was to use digital means, where a dedicated computer chip would convert electrical impulses from computer code into analog sound waves on the fly for output on a speaker. Sound effects for the games were also generated in this fashion. An early example of such an approach to video game music was the opening chiptune in Tomohiro Nishikado's Gun Fight (1975).
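The on-the-fly tone generation described above can be sketched in a few lines: a sound chip essentially flips its output between two levels at a note's frequency, producing a square wave. The sketch below is purely illustrative (real chips worked with clock dividers and registers, not floating-point arithmetic, and the function name and sample rate are assumptions for this example):

```python
# Illustrative sketch of square-wave tone generation, the kind of
# sound an early arcade sound chip produced. Not any real chip's API.

def square_wave(freq_hz, duration_s, sample_rate=8000, amplitude=1.0):
    """Return samples alternating between +amplitude and -amplitude."""
    samples = []
    period = sample_rate / freq_hz          # samples per wave cycle
    for n in range(int(duration_s * sample_rate)):
        # First half of each cycle is high, second half is low.
        phase = (n % period) / period
        samples.append(amplitude if phase < 0.5 else -amplitude)
    return samples

tone = square_wave(440.0, 0.01)             # 10 ms of the note A4
```

Feeding such a stream to a DAC and speaker yields the characteristic buzzy timbre of early game audio.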
While this allowed for the inclusion of music in early arcade video games, it was usually monophonic, looped, or used sparingly between stages or at the start of a new game, as in the Namco titles Pac-Man (1980), composed by Toshio Kai, and Pole Position (1982), composed by Nobuyuki Ohnogi. The first game to use a continuous background soundtrack was Tomohiro Nishikado's Space Invaders, released by Taito in 1978. It had four descending chromatic bass notes repeating in a loop, and the music was dynamic and interacted with the player, increasing in pace as the enemies descended. The first video game to feature continuous, melodic background music was Rally-X, released by Namco in 1980, featuring a simple tune that repeats continuously during gameplay. The decision to include any music in a video game meant that at some point it would have to be transcribed into computer code by a programmer, whether or not the programmer had musical experience. Some music was original; some was public domain music, such as folk songs. Sound capabilities were limited; the popular Atari 2600 home system, for example, was capable of generating only two tones, or "notes", at a time.
As advances were made in silicon technology and costs fell, a distinctly new generation of arcade machines and home consoles allowed for great changes in accompanying music. In arcades, machines based on the Motorola 68000 CPU, paired with various Yamaha YM programmable sound generator chips, allowed for several more tones or "channels" of sound, sometimes eight or more. The earliest known example of multi-channel arcade sound was Sega's 1980 game Carnival, which used an AY-3-8910 chip to create an electronic rendition of the classical 1889 composition "Over the Waves" by Juventino Rosas.
Konami's 1981 arcade game Frogger introduced a dynamic approach to video game music, using at least eleven different gameplay tracks, in addition to level-starting and game over themes, which change according to the player's actions. This was further improved upon by Namco's 1982 arcade game Dig Dug, where the music stopped when the player stopped moving. Dig Dug was composed by Yuriko Keino, who also composed the music for other Namco games such as Xevious (1982) and Phozon (1983). Sega's 1982 arcade game Super Locomotive featured a chiptune rendition of Yellow Magic Orchestra's "Rydeen" (1979); several later computer games also covered the song, such as Trooper Truck (1983) by Rabbit Software as well as Daley Thompson's Decathlon (1984) and Stryker's Run (1986) composed by Martin Galway.
Home console systems also saw a comparable upgrade in sound ability, beginning with the ColecoVision in 1982, which was capable of four channels. More notable, however, was the Japanese release of the Famicom in 1983, later released in the US as the Nintendo Entertainment System in 1985. It was capable of five channels, one of which could play simple PCM sampled sound. The Commodore 64 home computer, released in 1982, was capable of early forms of filtering effects, different types of waveforms, and eventually the undocumented ability to play 4-bit samples on a pseudo fourth sound channel. Its comparatively low cost, along with its ability to use a television set as an affordable display, made it a popular alternative to other home computers.
The approach to game music development in this period usually involved simple tone generation and/or frequency modulation synthesis to simulate instruments for melodies, with a "noise channel" used to simulate percussion. Early use of PCM samples in this era was limited to short sound bites (Monopoly) or served as a substitute for percussion sounds (Super Mario Bros. 3). The music on home consoles often had to share the available channels with other sound effects. For example, if a spaceship fired a laser beam, and the laser used a 1400 Hz square wave, then the square-wave channel the music was using would stop playing music and start playing the sound effect.
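The channel-sharing behavior described above can be modeled as a simple arbiter: a sound-effect request preempts whatever music voice occupies the channel, and the music resumes automatically when the effect ends. This is a toy illustration of the idea, not any console's actual sound driver (the class and method names are invented):

```python
# Toy model of a shared sound channel: a sound effect (e.g. a 1400 Hz
# laser) preempts the music playing on the same square-wave channel.

class SquareChannel:
    def __init__(self):
        self.music_freq = None      # frequency the music wants
        self.effect_freq = None
        self.effect_left = 0        # frames of sound effect remaining

    def play_music(self, freq):
        self.music_freq = freq

    def trigger_effect(self, freq, frames):
        self.effect_freq = freq
        self.effect_left = frames

    def tick(self):
        """Return the frequency actually output this frame."""
        if self.effect_left > 0:
            self.effect_left -= 1
            return self.effect_freq   # effect wins; music is silenced
        return self.music_freq        # music resumes automatically

ch = SquareChannel()
ch.play_music(660)
out = [ch.tick()]                     # music plays
ch.trigger_effect(1400, 2)            # laser fires for two frames
out += [ch.tick(), ch.tick(), ch.tick()]
# out == [660, 1400, 1400, 660]
```

The same pattern, with per-channel priorities, is essentially how early sound drivers decided which of music and effects got each hardware voice.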
By the mid-to-late 1980s, software releases for these platforms had music developed by more people with greater musical experience than before. Quality of composition improved noticeably, and evidence of the popularity of this period's music remains even today. Composers who made a name for themselves with their software include Koichi Sugiyama (Dragon Quest), Nobuo Uematsu (Final Fantasy), Rob Hubbard (Monty On the Run, International Karate), Koji Kondo (Super Mario Bros., The Legend of Zelda), Miki Higashino (Gradius, Yie-Ar Kung Fu, Teenage Mutant Ninja Turtles), Hiroshi Kawaguchi (Space Harrier, Hang-On, Out Run), Hirokazu Tanaka (Metroid, Kid Icarus, EarthBound), Martin Galway (Daley Thompson's Decathlon, Stryker's Run, Times of Lore), Yuzo Koshiro (Dragon Slayer, Ys, Shinobi, ActRaiser, Streets of Rage), Mieko Ishikawa (Dragon Slayer, Ys), and Ryu Umemoto (visual novels, shoot 'em ups). By the late 1980s, video game music was being sold as cassette tape soundtracks in Japan, inspiring American companies such as Sierra, Cinemaware and Interplay to give more serious attention to video game music by 1988. The Golden Joystick Awards introduced a category for Best Soundtrack of the Year in 1986, won by Sanxion.
Some games for cartridge systems were sold with extra audio hardware on board, including Pitfall II for the Atari 2600 and several late Famicom titles. These chips added to the existing sound capabilities.
Early digital synthesis and sampling
From around 1980, some arcade games began taking steps toward digitized, or sampled, sounds. Namco's 1980 arcade game Rally-X was the first known game to use a digital-to-analog converter (DAC) to produce sampled tones instead of a tone generator. That same year, the first known video game to feature speech synthesis was also released: Sunsoft's shoot 'em up game Stratovox. Around the same time, the introduction of frequency modulation synthesis (FM synthesis), first commercially released by Yamaha for their digital synthesizers and FM sound chips, allowed the tones to be manipulated to have different sound characteristics, where before the tone generated by the chip was limited to the design of the chip itself. Konami's 1983 arcade game Gyruss used five synthesis sound chips along with a DAC, which were used to create an electronic version of J. S. Bach's Toccata and Fugue in D minor.
Beyond arcade games, significant improvements to personal computer game music were made possible with the introduction of digital FM synth boards, which Yamaha released for Japanese computers such as the NEC PC-8801 and PC-9801 in the early 1980s, and by the mid-1980s, the PC-8801 and FM-7 had built-in FM sound. This allowed computer game music to have greater complexity than the simplistic beeps from internal speakers. These FM synth boards produced a "warm and pleasant sound" that musicians such as Yuzo Koshiro and Takeshi Abo utilized to produce music that is still highly regarded within the chiptune community. The widespread adoption of FM synthesis by consoles would later be one of the major advances of the 16-bit era, by which time 16-bit arcade machines were using multiple FM synthesis chips.
One of the earliest home computers to make use of digital signal processing in the form of sampling was the Commodore Amiga in 1985. The computer's sound chip featured four independent 8-bit digital-to-analog converters. Developers could use this platform to take samples of a music performance, sometimes just a single note long, and play them back through the computer's sound chip from memory. This differed from Rally-X, whose hardware DAC had been used to play back simple waveform samples; a sampled sound allowed for a complexity and authenticity of a real instrument that an FM simulation could not offer. As one of the first affordable platforms with these capabilities, the Amiga would remain a staple tool of early sequenced music composing, especially in Europe.
The Amiga offered these features before most other competing home computer platforms, though the Macintosh, introduced a year earlier, had similar capabilities. The Amiga's main rival, the Atari ST, used the Yamaha YM2149 Programmable Sound Generator (PSG). Compared to the Amiga's in-house designed sound engine, the PSG could handle only one channel of sampled sound, and needed the computer's CPU to process the data for it. This made it impractical for game development until 1989, when the Atari STE used DMA techniques to play back PCM samples at up to 50 kHz. The ST nevertheless remained relevant, as it was equipped with built-in MIDI ports; it became the choice of many professional musicians as a MIDI programming device.
IBM PC clones of 1985 would not see any significant development in multimedia abilities for a few more years, and sampling would not become popular in other video game systems for several years. Though sampling had the potential to produce much more realistic sounds, each sample required far more data in memory. This was at a time when all memory, whether solid state (ROM cartridge), magnetic (floppy disk), or otherwise, was still very costly per kilobyte. Sequenced sound-chip music, on the other hand, was generated from a few lines of comparatively simple code and took up far less precious memory.
Arcade systems pushed game music forward in 1984 with the introduction of FM (frequency modulation) synthesis, providing more realistic sounds than previous PSGs. The first such game, Marble Madness, used the Yamaha YM2151 FM synthesis chip.
As home consoles moved into the fourth generation, or 16-bit era, the hybrid approach (sampled and tone) to music composing continued to be used. The Sega Genesis offered advanced graphics over the NES and improved sound-synthesis features (also using a Yamaha chip, the YM2612), but largely held to the same approach to sound design. It offered ten channels of tone generation in total, one of which could play PCM samples, in stereo, compared with the NES's five channels in mono, one of which could play PCM. As before, the PCM channel was often used for percussion samples. The Genesis did not support 16-bit sampled sounds. Despite the additional tone channels, writing music still posed a challenge to traditional composers, and it forced much more imaginative use of the FM synthesizer to create an enjoyable listening experience. The composer Yuzo Koshiro utilized the Genesis hardware effectively to produce "progressive, catchy, techno-style compositions far more advanced than what players were used to" for games such as The Revenge of Shinobi (1989) and the Streets of Rage series, setting a "new high watermark for what music in games could sound like." The soundtrack for Streets of Rage 2 (1992) in particular is considered "revolutionary" and "ahead of its time" for its blend of house music with "dirty" electro basslines and "trancey electronic textures" that "would feel as comfortable in a nightclub as a video game." Another important FM synth composer was the late Ryu Umemoto, who composed music for many visual novels and shoot 'em ups during the 1990s.
As the cost of magnetic memory in the form of diskettes declined, the evolution of video game music on the Amiga, and some years later game music development in general, shifted to sampling in some form. It took some years before Amiga game designers learned to use digitized sound effects wholly in music (an early exception was the title music of the text adventure game The Pawn, 1986). By this time, computer and game music had already begun to form its own identity, and thus many music makers intentionally tried to produce music that sounded like that heard on the Commodore 64 and NES, which resulted in the chiptune genre.
The release of a freely distributed Amiga program named Soundtracker by Karsten Obarski in 1987 started the era of the MOD format, which made it easy for anyone to produce music based on digitized samples. Module files were made with programs called "trackers", after Obarski's Soundtracker. This MOD/tracker tradition continued with PCs in the 1990s. Examples of Amiga games using digitized instrument samples include David Whittaker's soundtrack for Shadow of the Beast, Chris Hülsbeck's soundtrack for Turrican 2, and Matt Furniss's tunes for Laser Squad. Richard Joseph also composed theme songs featuring vocals and lyrics for games by Sensible Software, most famously Cannon Fodder (1993), with the song "War Has Never Been So Much Fun", and Sensible World of Soccer (1994), with the song "Goal Scoring Superstar Hero". These songs used long vocal samples.
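The tracker idea is simple at its core: a pattern is a grid of rows, each row naming which sample each channel should retrigger; the player steps through rows at a fixed rate and mixes the active samples into one output. The following is a greatly simplified sketch of that playback loop (the real MOD format stores notes as period values with effect commands, and mixes with volume and pitch control; all names here are invented):

```python
# Greatly simplified tracker-style player: step through pattern rows,
# retrigger named samples per channel, and sum the voices into output.

def render_pattern(pattern, samples, ticks_per_row=4):
    """pattern: list of rows; each row lists a sample name (or None
    for 'keep playing') per channel. samples: name -> list of values."""
    playing = [None] * len(pattern[0])    # [sample data, position] per channel
    out = []
    for row in pattern:
        for ch, note in enumerate(row):
            if note is not None:
                playing[ch] = [samples[note], 0]   # retrigger from start
        for _ in range(ticks_per_row):
            mixed = 0
            for voice in playing:
                if voice and voice[1] < len(voice[0]):
                    mixed += voice[0][voice[1]]    # add this voice's sample
                    voice[1] += 1
            out.append(mixed)
    return out

samples = {"kick": [4, 4], "hat": [1]}
pattern = [["kick", None], [None, "hat"]]
audio = render_pattern(pattern, samples, ticks_per_row=2)
# audio == [4, 4, 1, 0]
```

Real trackers added per-channel volume, pitch resampling, and dozens of effects, but the row-stepping structure above is the heart of the format.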
A similar approach to sound and music development had become common in the arcades by this time and had been used in many arcade system boards since the mid-1980s. It was further popularized in the early 1990s by games like Street Fighter II (1991) on the CPS-1, which used voice samples extensively along with sampled sound effects and percussion. The Neo Geo MVS system also carried powerful sound hardware, which often included surround sound.
The Super NES (1990) brought digitized sound to console games.
The evolution also carried into home console video games, such as the release of the Super Famicom in 1990 and its US/EU version, the Super NES, in 1991. It sported a specialized custom Sony chip for both sound generation and hardware DSP effects. It was capable of eight channels of sampled sound at up to 16-bit resolution, had a wide selection of DSP effects, including a type of ADSR envelope control usually seen in high-end synthesizers of the time, and offered full stereo sound. This allowed experimentation with applied acoustics in video games, such as musical acoustics (early games like Super Castlevania IV, F-Zero, Final Fantasy IV, and Gradius III, and later games like Chrono Trigger), directional acoustics (Star Fox), and spatial acoustics (Dolby Pro Logic was used in some games, like King Arthur's World and Jurassic Park), as well as environmental and architectural acoustics (Zelda III, Secret of Evermore). Many games also made heavy use of the high-quality sample playback capabilities (Super Star Wars, Tales of Phantasia). The only real limitation of this powerful setup was the still-costly solid-state memory. Other consoles of the generation could boast similar abilities, yet did not have the same circulation levels as the Super NES. The Neo Geo home system was capable of the same powerful sample processing as its arcade counterpart, but cost several times as much as a Super NES. The Sega CD (the Mega CD outside North America) hardware upgrade to the Mega Drive (Genesis in the US) offered multiple PCM channels, but they were often passed over in favor of its CD-ROM capabilities.
Popularity of the Super NES and its software remained limited to regions where NTSC television was the broadcast standard. Partly because of the difference in frame rates of PAL broadcast equipment, many titles were never redesigned to play appropriately and ran much slower than intended, or were never released at all. This produced a divergence in popular video game music between PAL and NTSC countries that persists to this day. The divergence would lessen as the fifth generation of home consoles launched globally, and as Commodore began to take a back seat to general-purpose PCs and Macs for development and gaming.
Though the Mega CD/Sega CD, and to a greater extent the PC Engine in Japan, would give gamers a preview of the direction video game music would take with streaming music, the use of both sampled and sequenced music continues in game consoles even today. The huge data-storage benefit of optical media would be coupled with progressively more powerful audio-generation hardware and higher-quality samples in the fifth generation. In 1994, the CD-ROM-equipped PlayStation supported 24 channels of 16-bit samples at up to a 44.1 kHz sample rate, equal to CD audio in quality. It also sported a few hardware DSP effects, like reverb. Many Square titles continued to use sequenced music, such as Final Fantasy VII, Legend of Mana, and Final Fantasy Tactics. The Sega Saturn, also equipped with a CD drive, supported 32 channels of PCM at the same resolution as the PlayStation. In 1996, the Nintendo 64, still using a solid-state cartridge, supported an integrated and scalable sound system potentially capable of 100 channels of PCM and an improved sample rate of 48 kHz. Because of the cost of solid-state memory, however, N64 games typically had samples of lesser quality than the other two consoles, and music tended to be simpler in construction.
The more dominant approach for games based on CDs, however, was shifting toward streaming audio.
MIDI on the PC
The first developers of IBM PC computers neglected audio capabilities (first IBM model, 1981).
In the same timeframe, from the late 1980s to the mid-1990s, IBM PC clones using the x86 architecture became more ubiquitous, yet followed a very different path in sound design than other PCs and consoles. Early PC gaming was limited to the PC speaker and some proprietary standards such as the IBM PCjr's 3-voice chip. While sampled sound could be achieved on the PC speaker using pulse width modulation, doing so required a significant proportion of the available processor power, rendering its use in games rare.
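Pulse width modulation here means driving the one-bit speaker output high for a fraction of each very short period proportional to the desired sample value; averaged out by the speaker cone, the duty cycle approximates an analog level. A hedged sketch of the encoding (real drivers reprogrammed the PC's timer chip in a tight loop, which is precisely why the technique consumed so much CPU; the function below is invented for illustration):

```python
# Sketch of 1-bit PWM sample playback: each 8-bit sample becomes one
# short period of on/off output whose duty cycle tracks the sample.

def pwm_encode(samples_8bit, period=256):
    """Expand each sample (0-255) into `period` one-bit outputs."""
    bits = []
    for s in samples_8bit:
        high = s * period // 256          # ticks the output stays high
        bits.extend([1] * high + [0] * (period - high))
    return bits

bits = pwm_encode([128], period=8)        # mid-level sample -> 50% duty
# bits == [1, 1, 1, 1, 0, 0, 0, 0]
```

Because every single sample expands into many precisely timed output toggles, the CPU had little time left for anything else, which is why PWM speaker audio appeared in so few games.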
With the increase of x86 PCs in the market, there was a vacuum in sound performance in home computing that expansion cards attempted to fill. The first two recognizable standards were the Roland MT-32, followed by the AdLib sound card. Roland's solution was driven by MIDI sequencing using advanced LA synthesis. This made it the first choice for game developers to produce on, but its higher cost as an end-user solution made it prohibitive. The AdLib used a low-cost FM synthesis chip from Yamaha, and many boards could operate compatibly using the MIDI standard.
The AdLib card was usurped in 1989 by Creative's Sound Blaster, which used the same Yamaha FM chip as the AdLib for compatibility, but also added 8-bit 22.05 kHz (later 44.1 kHz) digital audio recording and playback of a single stereo channel. As an affordable end-user product, the Sound Blaster constituted the core sound technology of the early 1990s: a combination of a simple FM engine that supported MIDI and a DAC engine of one or more streams. Only a minority of developers ever used Amiga-style tracker formats in commercial PC games (Unreal), typically preferring the MT-32 or AdLib/SB-compatible devices. As general-purpose x86 PCs became more ubiquitous than the other PC platforms, developers focused on that platform.
The last major development before streaming music came in 1992, when Roland Corporation released the first General MIDI card, the sample-based SCC-1, an add-in card version of the SC-55 desktop MIDI module. The comparative quality of the samples spurred similar offerings from Sound Blaster, but costs for both products were still high. Both companies also offered "daughterboards" with sample-based synthesizers that could be added to a less expensive sound card (one with only a DAC and a MIDI controller) to give it the features of a fully integrated card.
Unlike the Amiga or Atari standards, an x86 PC could be running a broad mix of hardware even then. Developers therefore increasingly used MIDI sequences: instead of writing soundtrack data for each type of sound card, they generally wrote a fully featured data set targeting the Roland hardware that would remain compatible with less capable equipment, so long as it had a MIDI controller to run the sequence. However, different products attached different sounds to their MIDI controllers. Some relied on the Yamaha FM chip to simulate instruments, while some sample-based daughterboards had very different sound qualities, meaning that no single sequence performance would sound the same on every General MIDI device.
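What a MIDI sequence actually carries is instructions, not audio: each event is a status byte plus one or two data bytes, and the attached synthesizer decides what the note sounds like, which is exactly why the same sequence sounded different on every device. A minimal sketch of building the raw bytes (helper names are invented; the byte layout follows the MIDI 1.0 channel message format):

```python
# Build raw MIDI channel messages. A note-on is status 0x90 | channel,
# followed by key number and velocity (each 0-127). The receiving
# synth, not the sequence itself, determines the actual timbre.

def note_on(channel, key, velocity):
    assert 0 <= channel < 16 and 0 <= key < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, key, velocity])

def program_change(channel, program):
    """Select an instrument patch. General MIDI standardized these
    numbers (e.g. program 0 = Acoustic Grand Piano) so that sequences
    would at least pick comparable instruments across devices."""
    assert 0 <= channel < 16 and 0 <= program < 128
    return bytes([0xC0 | channel, program])

msg = note_on(0, 60, 100)   # middle C on channel 1: bytes 0x90 0x3C 0x64
```

A three-byte note-on versus tens of kilobytes for a second of sampled audio is the storage trade-off the article describes: MIDI soundtracks were tiny, but their sound depended entirely on the playback hardware.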
All of these product considerations reflected the high cost of memory storage, which rapidly declined with the optical CD format.
Pre-recorded and streaming music
Using entirely pre-recorded music had many advantages over sequencing in sound quality. Music could be produced freely with any kind and number of instruments, allowing developers to simply record one track to be played back during the game. Quality was limited only by the effort put into mastering the track itself. The memory-space cost that had previously been a concern was somewhat addressed as optical media became the dominant medium for game software. CD-quality audio allowed for music and voice with the potential to be truly indistinguishable from any other source or genre of music.
In fourth generation home video games and PCs this was limited to playing a Mixed Mode CD audio track from a CD while the game was in play (such as Sonic CD). The earliest examples of Mixed Mode CD audio in video games include the TurboGrafx-CD RPG franchises Tengai Makyō, composed by Ryuichi Sakamoto from 1989, and the Ys series, composed by Yuzo Koshiro and Mieko Ishikawa and arranged by Ryo Yonemitsu in 1989. The Ys soundtracks, particularly Ys I & II (1989), are still regarded as some of the most influential video game music ever composed.
However, there were several disadvantages to regular CD audio. Optical drive technology was still limited in spindle speed, so playing an audio track from the game CD meant that the system could not access data again until it stopped the track from playing. Looping, the most common form of game music, was also a problem: when the laser reached the end of a track, it had to move back to the beginning to start reading again, causing an audible gap in playback.
To address these drawbacks, some PC game developers designed their own container formats in house, in some cases for each application, to stream compressed audio. This cut back on the memory used for music on the CD, allowed for much lower latency and seek time when finding and starting to play music, and also allowed for much smoother looping, since the data could be buffered. A minor drawback was that the compressed audio had to be decompressed, which put load on the system's CPU. As computing power increased, this load became minimal, and in some cases dedicated chips in a computer (such as a sound card) would handle all the decompressing.
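The buffering trick that made smooth looping possible can be sketched as a reader that refills fixed-size buffers from the decoded music data and wraps to the start when it reaches the end, so the output device never sees the seek gap. This is an illustrative sketch only; the class and method names are assumptions, and real engines worked with compressed chunks and hardware buffers:

```python
# Streaming reader that fills fixed-size buffers from a looped track,
# wrapping at the loop point so playback never pauses.

class LoopingStreamer:
    def __init__(self, track, buffer_size):
        self.track = track            # decoded PCM samples (a list)
        self.buffer_size = buffer_size
        self.pos = 0

    def next_buffer(self):
        """Return the next buffer, wrapping seamlessly at track end."""
        buf = []
        while len(buf) < self.buffer_size:
            want = self.buffer_size - len(buf)
            chunk = self.track[self.pos:self.pos + want]
            buf.extend(chunk)
            self.pos = (self.pos + len(chunk)) % len(self.track)
        return buf

s = LoopingStreamer([1, 2, 3, 4, 5], buffer_size=4)
a = s.next_buffer()   # [1, 2, 3, 4]
b = s.next_buffer()   # [5, 1, 2, 3] -- loop point crossed with no gap
```

Because the next buffer is always ready before the current one finishes, the disc can be free for data reads between refills, which is the behavior the paragraph above describes.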
Fifth-generation home console systems also developed specialised streaming formats and containers for compressed audio playback. Games took full advantage of this ability, sometimes with highly praised results (Castlevania: Symphony of the Night). Games ported from arcade machines, which continued to use FM synthesis, often saw superior pre-recorded music streams on their home console counterparts (Street Fighter Alpha 2). Even though the game systems were capable of "CD quality" sound, these compressed audio tracks were not true "CD quality": many had lower sampling rates, though not so much lower that most consumers would notice. Using a compressed stream allowed game designers to play back streamed music while still accessing other data on the disc without interrupting the music, at the cost of CPU power used to render the audio stream. Manipulating the stream any further would have required far more CPU power than was available in the fifth generation.
Some games, such as the Wipeout series, continued to use full Mixed Mode CD audio for their soundtracks.
This overall freedom gave video game music the equal footing with other popular music that it had previously lacked. A musician could now, with no need to learn about programming or the game architecture itself, independently produce the music to their satisfaction. This flexibility was exercised as popular mainstream musicians began lending their talents to video games. An early example is Way of the Warrior on the 3DO, with music by White Zombie. A better-known example is Trent Reznor's score for Quake.
An alternate approach, as with the TMNT arcade, was to take pre-existing music not written exclusively for the game and use it in the game. The game Star Wars: X-Wing vs. TIE Fighter and subsequent Star Wars games took music composed by John Williams for the Star Wars films of the 1970s and 1980s and used it for the game soundtracks.
Both commissioning new music streams made specifically for a game and licensing previously released or recorded music streams remain common approaches to developing soundtracks to this day. It is common for extreme-sports video games to ship with recent releases from popular artists (SSX, Tony Hawk, Initial D), as is any game with a strong cultural theme that ties into music (Need for Speed: Underground, Gran Turismo, and Grand Theft Auto). Sometimes a hybrid of the two is used, as in Dance Dance Revolution.
Sequenced samples continue to be used in modern gaming for many purposes, mostly in RPGs. Sometimes a cross between sequenced samples and streaming music is used. Games such as Republic: The Revolution (music composed by James Hannigan) and Command & Conquer: Generals (music composed by Bill Brown) have utilised sophisticated systems governing the flow of incidental music by stringing together short phrases based on the on-screen action and the player's most recent choices (see dynamic music). Other games dynamically mix the game's sound based on cues from the game environment.
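The phrase-stringing approach described above can be modeled as a simple state machine: short pre-composed phrases are tagged with game states, and the engine picks a matching phrase each time the current one ends. The following toy model illustrates the idea only; the phrase names, states, and functions are all invented for this sketch and correspond to no particular game's system:

```python
# Toy dynamic-music sequencer: at each phrase boundary, choose the
# next short phrase based on the current game state.

import random

# Hypothetical phrase library, tagged by game state.
PHRASES = {
    "calm":   ["calm_a", "calm_b"],
    "combat": ["combat_a", "combat_b", "combat_sting"],
}

def next_phrase(state, rng=random):
    """Choose a phrase that matches the current game state."""
    return rng.choice(PHRASES[state])

def play_session(states, rng=random):
    """String phrases together as the game state evolves over time."""
    return [next_phrase(s, rng) for s in states]

playlist = play_session(["calm", "calm", "combat", "calm"])
```

Real systems such as those cited add transition rules and musical "stingers" so that phrase changes land on beat boundaries, but the state-to-phrase mapping is the core mechanism.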
As processing power increased dramatically in the sixth generation of home consoles, it became possible to apply special effects in real time to streamed audio. In the SSX series, if a snowboarder takes to the air after jumping from a ramp, the music softens or muffles a bit, and the ambient noise of wind and air blowing becomes louder to emphasize being airborne. When the snowboarder lands, the music resumes regular playback until its next "cue". LucasArts pioneered this interactive music technique with its iMUSE system, used in its early adventure games and the Star Wars flight simulators Star Wars: X-Wing and Star Wars: TIE Fighter. Action games such as these change the music dynamically to match the amount of danger. Stealth-based games sometimes rely on such music, either by handling streams differently or by dynamically changing the composition of a sequenced soundtrack.
In the past, playing one's own music during a game usually meant turning down the game audio and using an alternative music player. Some early exceptions were possible in PC/Windows gaming, where it was possible to independently adjust game audio while playing music with a separate program running in the background. Some PC games, such as Quake, play music from the CD while retrieving game data exclusively from the hard disk, thereby allowing the game CD to be swapped for any music CD. The first PC game to introduce in-game support for custom soundtracks was Lionhead Studios' Black & White. The 2001 game included an in-game interface for Winamp that enabled players to play audio tracks from their own playlists. This would sometimes trigger various reactions from the player's Creature, like dancing or laughing.
Some PlayStation games supported this by swapping the game CD with a music CD, although when the game needed data, players had to swap the CDs again. One of the earliest games, Ridge Racer, was loaded entirely into RAM, letting the player insert a music CD to provide a soundtrack throughout the entirety of the gameplay. In Vib Ribbon, this became a gameplay feature, with the game generating levels based entirely on the music on whatever CD the player inserted.
Microsoft's Xbox allowed music to be copied from a CD onto its internal hard drive, to be used as a "Custom Soundtrack", if enabled by the game developer. The feature carried over into the Xbox 360 where it became supported by the system software and could be enabled at any point. The Wii is also able to play custom soundtracks if it is enabled by the game (Excite Truck, Endless Ocean). The PlayStation Portable can, in games like Need for Speed Carbon: Own the City and FIFA 08, play music from a Memory Stick.
The PlayStation 3 can utilize custom soundtracks in games using music saved on the hard drive; however, few game developers used this function. MLB 08: The Show, released in 2008, has a My MLB soundtrack feature that allows the user to play music tracks of their choice saved on the hard drive of their PS3, rather than the preprogrammed tracks incorporated into the game by the developer. An update to Wipeout HD, released on the PlayStation Network, also incorporated this feature.
In the video game Audiosurf, custom soundtracks are the main aspect of the game. Users pick a music file to be analyzed, and the game generates a race track based on the tempo, pitch, and complexity of the sound. The user then races on this track, synchronized with the music.
Games in the Grand Theft Auto series have supported custom soundtracks, using them as a separate in-game radio station. The feature was primarily exclusive to PC versions, and was adopted to a limited degree on console platforms. On a PC, inserting custom music into the stations is done by placing music files into a designated folder. For the Xbox version, a CD must be installed into the console's hard drive. For the iPhone version of Grand Theft Auto: Chinatown Wars, players create an iTunes playlist which is then played by the game.
Forza Horizon 3 used a similar technology of custom soundtracks with the help of Groove Music.
Developments in the 2000s
The Xbox 360 supports Dolby Digital software, 16-bit sampling and playback at 48 kHz (internal, with 24-bit hardware D/A converters), hardware codec streaming, and potentially 256 simultaneous audio channels. While powerful and flexible, none of these features represents a major change in how game music is made from the previous generation of console systems. PCs continue to rely on third-party devices for in-game sound reproduction, and Sound Blaster is largely the only major player in the entertainment audio expansion card business.
The PlayStation 3 handles multiple types of surround sound technology, including Dolby TrueHD and DTS-HD, with up to 7.1 channels, and with sampling rates of up to 192 kHz.
Nintendo's Wii console shares many audio components with the Nintendo GameCube from the previous generation, including Dolby Pro Logic II. These features are extensions of technology already currently in use.
The game developer of today has many choices in how to develop music. More likely, changes in video game music creation will have very little to do with technology and more to do with other factors of game development as a business whole. Video game music has diversified to the point where scores for games can be presented as anything from a full orchestra to simple 8/16-bit chiptunes. This degree of freedom has made the creative possibilities of video game music limitless to developers. As sales of video game music diverged from the game itself in the West (compared to Japan, where game music CDs had been selling for years), business elements also wield a new level of influence. Musicians from outside the game developer's immediate employment, such as composers and pop artists, have been contracted to produce game music just as they would for a theatrical movie. Many other factors have growing influence, such as editing for content, politics at some level of development, and executive input.