
BWW 2022 Final Blog Post - Evaluation

  • Writer: Petra Mickey
  • Mar 10, 2022
  • 27 min read

Updated: Mar 14, 2022


DISCLAIMER: If you want to see the Demo Video (with the sounds and music), it is on a former blog post, called "The Midway Point". This post covers the Soundscape, the Scores and sounds that happened outside of the video, and the Evaluation.


SOUNDSCAPE:


After finishing the Foley and the original level music, as well as the main menu music, for the game, I had to redo the Soundscape. Because the soundscape was a sample from Spitfire LABS, I had very little control over the sounds and layers that went into it. This became a problem: the samples, whilst sounding great in their own right, were of a tundra/forested setting, so loud birdsong was the dominant sound. The video, however, shows an open "field" style setting, and likewise the game calls for a Mediterranean "Greek" setting, which mostly does not contain tundras or large forests. This meant that the birdsong in the ambience needed to be much more subtle and laid back, with the dominant sounds being wind noise, open ambience and perhaps insects/grass/crickets. Because the sample was pre-made, I could not turn down the birds, so I had to use a different sample.


At first, I tried to make my own sample, using the wind sounds from Storm Eunice and my cell phone's microphone. However, this was not easy: the loud, strong winds clipped the mic, and the ambient noise present both on campus and in town, such as people talking and the sounds of cars, quickly ruined the samples. I persevered and tried to use the Cock Blocker noise gate, from Spectre Digital, to eliminate some of the noise. However, this in turn cut out the quieter, more subtle parts of the wind noise and let through only the clipping parts, and it still let through the louder surplus noises, like people screaming/talking loudly, loud car door slams and engines. From the seven-minute recording of different settings in Lichfield during the storm, I ended up with only a few bars' worth of actually usable audio, which was not enough and clearly sounded like a repeating loop.
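For anyone curious why a gate behaves this way, here is a minimal sketch in Python (using NumPy): samples below the threshold are silenced outright, which is exactly why the quiet wind detail got cut along with the background noise. Real gates like the Cock Blocker add attack/release smoothing, which this toy version omits.

```python
import numpy as np

def noise_gate(signal, threshold):
    # Toy gate: keep samples at or above the threshold, zero the rest.
    # (Real gates smooth the transitions with attack/release envelopes.)
    return np.where(np.abs(signal) >= threshold, signal, 0.0)

# quiet background hum plus a loud transient
sig = np.array([0.01, -0.02, 0.8, -0.7, 0.015, 0.03])
out = noise_gate(sig, threshold=0.1)
```

With the threshold at 0.1, only the loud transient survives; everything quieter, wanted or not, is cut, which is the trade-off I ran into.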


So I took to YouTube and found this video: https://www.youtube.com/watch?v=yEn8_X7Ei3A, which sounded decent. I added some post-production effects, such as slight EQ, and layered it with LABS birdsong sounds, which I randomly dotted in every few bars or so. This allowed the sound to feel somewhat random and more natural; however, I made sure the birdsong was quieter and less prominent, to stay true to the setting. I then sent both of these samples through a bus containing Logic's Space Designer, set to an Open Ambience impulse response. This would allow both sounds to seemingly come from the same source, essentially sounding like parts of one sound, as they both shared the same room impulse response.


Bussing is when the outputs of one or more tracks are sent to an auxiliary (separate) audio track. This is a useful technique for organising a project, as it allows certain tracks to be processed together by just one effect, for example an EQ or compressor, by sending them all through the same bus, as opposed to having to EQ/compress/tend to every single track separately, which leaves more room for mistakes, especially when dealing with lots of sounds that you want to sound similar or the same, such as a double-tracked guitar or string instrument. Likewise, as in this scenario, busses help in sending the outputs of different tracks through one effect, like a room simulation, which makes them all sound like they're playing together in one room. This can be especially beneficial for orchestral instruments, to make them all sound like they're playing together in a massive concert hall (Dixon, D. (2019)).
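The routing idea can be sketched in a few lines of Python (NumPy): several tracks are summed onto one bus, and a single stand-in effect then processes the summed signal once, instead of each track separately.

```python
import numpy as np

def bus_effect(bus, gain):
    # Stand-in for whatever is inserted on the bus
    # (an EQ, compressor, or room reverb in a real DAW).
    return bus * gain

# two tracks routed to the same bus
track_a = np.array([0.2, 0.4, 0.1])
track_b = np.array([0.1, 0.1, 0.3])

bus = track_a + track_b              # the bus sums its inputs
processed = bus_effect(bus, 0.5)     # one effect treats everything together
```

Because both tracks pass through the same processing, they pick up identical character, which is what makes them sound like they share one room.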




After doing that, I bounced out the soundscape, and added it to my sound library's soundscape folder. This soundscape, as mentioned before, will play in the background, when the player is moving about the level.


I also made some water sounds that would play in the background on levels near the sea. These were recordings of the seaside, taken by my friend when she went to Blackpool, which I took from TikTok. I applied the same open ambience to them as I did to the wind sounds, so they sound more open and less compressed (which was somewhat of an issue, as they were recorded on a cell phone). Before applying the Space Designer, however, I ran the sound through a Channel EQ, which mostly removed resonances in the mids, with the nastiest resonance sitting at around one kilohertz and needing a big surgical cut. The low end, at 40 to 100 hertz, also needed some shaping, with a small scoop to stop it from booming and muddying up the sound.
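The kind of surgical cut described, a narrow notch at roughly one kilohertz, can be approximated in Python with SciPy's `iirnotch`. The frequency and Q below are illustrative, not my exact Channel EQ settings.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 44100          # sample rate
f0 = 1000.0         # resonance to remove (~1 kHz, as in the text)
Q = 30.0            # high Q = narrow, "surgical" notch

b, a = iirnotch(f0, Q, fs)

# A test tone sitting right on the notch frequency should come out
# strongly attenuated, while material away from 1 kHz passes mostly intact.
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * f0 * t)
filtered = filtfilt(b, a, tone)
```

The higher the Q, the narrower the cut, which is why a surgical notch can remove a resonance without gutting the surrounding mids.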






SCORE:


I then went on to compose some scores for battle situations that may appear in the game. These would essentially be Leitmotifs that would play as a battle commences. A Leitmotif is a scoring technique, used in both films and games, that consists of a short musical piece assigned to a variable, such as a situation, a person, an object or an idea, and serves as a theme for that variable. For example, the Darth Vader theme/Imperial March in Star Wars often plays when Darth Vader, or the Sith in general, are mentioned or shown on screen. Likewise, in the context of a game, there are plenty of ever-changing Leitmotifs in Heroes of Might and Magic V, where something as simple as a couple of notes played on French horns can easily be associated with the Inferno theme. Leitmotifs are also prone to change as the story progresses. This means that while the general concept, such as the melody and rhythm, may remain somewhat the same, certain aspects of the Leitmotif, such as the timing and key signature, may change depending on the situation, as seen in the Dungeon theme in Heroes 5, with the string melody/riff that represents the faction playing in a somewhat different style and timing when in battle (Heckmann, C. (2020)).


I intended these Leitmotifs to be battle themes that play when the tournament starts and the players engage in combat. Both of these themes were inspired by the OSTs (Original Soundtracks) of Heroes III and Heroes IV. I also created battle Foley sounds, as requested by the game developer. These consisted of armour sounds, weapon sounds and power sounds.



FOLEY:


For these, I used a vast array of different samples, as well as some synthesiser sounds. For armour and shield sounds, I used the clanging of a struck Dr Pepper can, pitched down and layered with pitched-down sounds of loose coins being jolted around in my hand. I created these layers in CWITEC's TX16Wx sampler.


I also created three different tracks for the armour sound. The layered sound described above was used as the sound of the armour taking a direct hit, which is why an additional attack sound was created with the Dr Pepper can being hit with a pen, to represent the initial hit, with the jolted coins sample played over it to create the impression of chainmail/clanging armour.


I also made a cloth track for the armour, on a different track, that would essentially be the sound of the armour moving as the player moves. A cloth track is essentially the tracking of clothing sounds on characters with a close mic, to make the clothing audible and clear in the film or game and to add realism (barbarabrownie (2015)). For the armour-moving sounds, I rattled the loose coins in my hand in front of my Rode NT1-A condenser microphone. This sound, when performed rhythmically and pitched down in TX16Wx, emulated the character running/walking and sounded like chainmail or rattling armour.


I also made an armour-equipping sound, inspired by the armour-equipping sounds in Minecraft. This sound consisted of the coins being moved around my palm in front of the Rode mic. It was then slowed down and pitched down in TX16Wx, to create the effect of armour being slid on and buckled up.


The reason I used the Rode NT1-A specifically, and not my computer's microphone, is that it obviously gave better quality than my Mac's built-in mic, but also that it is a condenser, meaning it has a built-in preamplifier and needs phantom power to work. This allows the mic to pick up very slight, quiet sounds that a regular dynamic mic would not, such as the coins ringing and rubbing together (Wreglesworth, R. (n.d.)).





I made some sword and shield sounds as well. I already had the mace swing sound that I made with the ESP synth engine, mentioned in the last post. For swords, however, a metallic "shing" sound is needed, as well as the tinny clanging of swords hitting armour or other swords. I found the perfect sound by complete accident: whilst about to spread Nutella on waffles, I took a knife out of a drawer full of other knives, forks and metal objects, and found it made exactly the sliding/metallic sound I needed for my project. So I took a large, long meat knife, which was more like a sword, placed the blade between the prongs of a fork, and pulled the knife through the fork at different speeds in front of the Rode mic, to "practice" getting the velocity right, so that the sound was long enough to sound like a sword blade, but also fast enough to sound like an aggressively drawn sword.


Once I nailed the sound and got a good take, I pitched it down in TX16Wx. The resulting sound was a gritty, metallic, nasty one. Whilst I wanted to keep things organic and true to reality, I did not want the player to have their eardrums pierced every time their character drew their sword, so I ran the sound through some EQ to turn down the nastiest resonant frequencies somewhat. This was specifically required at around four kilohertz, where a piercing ringing frequency was present, and likewise some of the highs at 12.5 kilohertz needed a slight surgical cut to be bearable, but not too much, to ensure the sound remained audible, organic, unfiltered and "raw". I also boosted the presence by adding a high shelf, to add a shimmery, sparkly effect in the highs. I added a high-pass filter at 300 hertz, to stop the muddy and whiny low frequencies from ruining the sound, but left a small gap at 350 to around 500 hertz, so that some mid range still comes through. I then took out some nasty, honky/squeaky frequencies in the mid range and high mids, at 600 hertz and 1.5 kilohertz. Finally, I made the sound sustain some more by adding some ChromaVerb, which increased its decay.


This part of the process is called post-production, and it is where the sound is fine-tuned. Finding the perfect take is known as comping, where multiple takes are compiled into one perfect take; however, Joey Sturgis Tones also mentions that post-production can include mixing and mastering, where the sound is polished (or rolled in glitter, if the original sound is really bad) (Joey Sturgis Tones. (n.d.)). By post-production, I am referring to the phase where the few takes I did of the knife being drawn through the fork were comped into one perfect take (nSync. (2016)). It is also where the sound was improved (or made listenable) by adding external effects such as EQ and compression, to further shape it sonically and dynamically. "Sonically" means to do with the sound's individual frequencies, which in this case are made to sound good by being turned up or down (boosted or cut) with equalisers (Gemtracks Beats. (n.d.)). "Dynamics" here refers to the amplitude (loudness) of a sound wave, which can be affected by the use of expanders and compressors, such as Supercharger, or in this case by changing how loudly I perform the sound (Farrant, D. (n.d.)). It is important to get the performance correct in the recording phase, both sonically, by ensuring it sounds good, and dynamically, by controlling the velocity of your performance, as it makes the mixing phase much easier and more manageable.
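As a rough illustration of the dynamics side, here is a minimal per-sample downward compressor in Python (NumPy): amplitude above a threshold is scaled down by a ratio. Real compressors like Supercharger add attack/release envelopes and makeup gain, which this sketch leaves out.

```python
import numpy as np

def compress(signal, threshold, ratio):
    # Toy downward compressor: any amplitude above the threshold is
    # reduced by the ratio, shrinking the dynamic range.
    mag = np.abs(signal)
    over = mag > threshold
    out = signal.copy()
    out[over] = np.sign(signal[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

sig = np.array([0.2, 0.9, -1.0, 0.4])
squashed = compress(sig, threshold=0.5, ratio=4.0)
```

At a 4:1 ratio, a peak of 0.9 lands at 0.6, while anything under the 0.5 threshold passes untouched, so loud and quiet parts end up closer together.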


For the clanging sounds that occur when swords clash with solid objects, such as other swords or armour/weapons, I hit the side of the knife's blade with the fork, in front of the mic. I made many takes, which I compiled into three separate versions. The reason I kept three separate versions is that, while with something like a shield block or sword draw the action remains similar, when fighting with swords the weapons clash at different angles, speeds and velocities each time, resulting in slightly varying sounds. So I essentially tried to get some nuance by recording many takes and compiling them into three individual tracks that each sound different. This type of recording and sampling is called a "Round Robin", which essentially consists of recording different takes of the same sound, to bring out nuance when the sound is triggered. This in turn creates realism, as opposed to constantly hammering out the exact same sound each time, like a machine (Audio, S. (2020)).
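The round-robin idea is simple enough to sketch in Python: this toy sampler cycles through a list of takes so repeated triggers never return the identical sound twice in a row. The file names are hypothetical, just stand-ins for the three clang tracks.

```python
class RoundRobinSampler:
    """Cycle through several takes of the same sound, so repeated
    triggers don't sound machine-like. File names are illustrative."""

    def __init__(self, takes):
        self.takes = takes
        self.index = 0

    def trigger(self):
        # Return the current take, then advance (wrapping around).
        take = self.takes[self.index]
        self.index = (self.index + 1) % len(self.takes)
        return take

clang = RoundRobinSampler(["clang_a.wav", "clang_b.wav", "clang_c.wav"])
hits = [clang.trigger() for _ in range(4)]
```

Some engines pick the next take at random instead of in strict rotation; either way, the point is variation on every trigger.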


Finally, for shield sounds, I used a similar technique, but hit the side of the blade with a wooden implement instead. This sounds like a sword hitting a shield, and to give the sound increased decay, I ran it through some ChromaVerb.





For battles where players may stand in puddles of water, such as battles on the shore when the tide comes in, I sampled some footsteps-in-water sounds by "walking" my hands in a sink full of water, recorded with my cell phone. This sound was very hard to achieve, due to the large amount of room noise, but I managed to get some different takes. I did my best to de-noise these with the Cock Blocker noise gate, setting the threshold at around 8 o'clock, which let through most of the sound but cut off the quieter distractions after the loud water steps. I then cut them up into separate steps for the sound library.


Finally, I recorded some human grunts, by making different vocal effects into the Rode mic. For the attack sound, made to intimidate the enemy or give power to an attack, I made a short, staccato scream with some overdrive. This scream was somewhat similar to a stereotypical karate kiai, which is a battle shout that gives power to an attack (The Martial Way. (2016)). I performed this shout a few times, in different takes, to get the perfect sound.


I also made a taking-damage sound, which consisted of me screaming with loads of overdrive. In vocals, overdrive is a technique used by vocalists in rock and metal genres to add rasp and distortion to the voice, making it sound harsher and louder, as heard in the Slayer piece Angel Of Death. In this piece, Tom Araya delivers the vocals in an overdriven/belted state that gives them power and anger, and likewise, at the very start of the piece, uses overdrive and vocal fry to create a loud, huge falsetto scream, with what is probably his head voice. This scream is used to signify the anguish and pain of the victims of the Final Solution and the Angel of Death, which is what the song is about, and it is this scream that I was inspired by, as it conveys pain well. The difference, however, is that my scream was shorter and more of a fry scream mixed with overdrive, and instead of signifying intense pain and anguish due to torture, like Tom Araya's in Angel Of Death, mine was used to signify a warrior being hit and injured in combat, which would usually be short, due to the combatants being out of breath and energy from the battle.


Overdrive is achieved using a technique called glottal compression, which is created by using the diaphragm to push air with more power, getting the vocal cords vibrating with more amplitude. The larynx becomes compressed (hence the name compression singing) and produces a stronger output, adding overtones and a wetter distortion sound, while the false cords, also known as the vestibular folds, form the rasp and grit by clashing together (Anon, (n.d.)). This functions similarly to how distortion is achieved on a guitar amplifier, with a tube screamer and/or preamp driving the tubes so hard that they start to distort (Anon, (2018)).


I then recorded a slightly different taking-a-hit sound, which was more like a grunt in the way it was performed: just going "ugh", in a cleaner way than the prior scream, making it sound more like being punched or hit with a blunt weapon, as opposed to the prior scream, which was meant to represent being slashed or hit with a sharp/hot object, where a person is more likely to scream than grunt. This is why I named this sound "taking a hit" in the sound library, while the former sound is named "injury", showing that they're both distinct sounds uttered by the avatar as a result of different forms of pain in battle, such as being hit or being slashed.

To finish off all the required sounds, I made some power sounds. These would be the sounds made by the different abilities, spells and powers in the game. I stuck to a few key sounds and decided to use mostly synthesisers to make them. The main reason I used synthesisers is that these are ethereal/otherworldly, godlike powers, which technically transcend the laws of physics, or at least are very supernatural. A synth lets me make and sculpt those sounds very precisely, and once done, I can probably reuse the same sound again and again without worrying that it will sound unrealistic, because these godlike powers are beyond our mortal perception of reality (transcendent), and thus can sound like anything. So I wanted to make them sound as quirky, magical and weird as my skills allow.


I divided the sounds into the different possible power types that may come up in game. I made a defensive/shield spell power, which is seen in the demo video (shown in previous post), and surrounds the character with a golden/yellow aura for a short time. I also came up with sounds for a mystical/wind power, a storm/lightning power, an offensive/magic missile type spell and the ground impact, which is also heard and seen in demo video at the very beginning.


For the ground impact, I used the ESP synth engine to make a sub-bass hit. I started by turning down all the waveforms, leaving only white noise, which I then pitched down, as before. Then, to make the sound's low end more audible, and to make the whole sound gnarly and heavy, like a huge explosion or impact, I applied heavy, nasty, lo-fi-sounding distortion, using a Boss Metal Zone simulation plugin called Metal Area, by Mercuriall Audio. This brings out the low end because of how distortion functions: distortion essentially squares off a wave at its peaks, which sounds more compressed, due to the lowered dynamic range, but also brings out more frequencies. This made the sub bass more audible, but also made the whole thing sound like a huge explosion. I also applied an equaliser, with a heavy surgical cut in the low mids to remove the boxy, muddy frequencies, and wide-Q boosts in the bass and lower low mids and in the highs and high mids, to bring out all the frequencies in the sound. Finally, I applied a small high-pass filter at the very low end, since the frequencies down there are barely audible, and these surplus frequencies become problematic when mastering, as they cause the output to clip (www.youtube.com. (n.d.)).
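The "squaring off" that distortion does can be shown with a minimal hard clipper in Python (NumPy): peaks beyond a ceiling are flattened, which lowers the dynamic range and adds harmonics. Real pedal sims like Metal Area use much softer, more complex waveshaping than this.

```python
import numpy as np

def hard_clip(signal, ceiling=0.3):
    # Flatten everything beyond +/- ceiling: the wave's peaks get
    # squared off, which compresses dynamics and adds harmonics.
    return np.clip(signal, -ceiling, ceiling)

t = np.linspace(0, 1, 1000, endpoint=False)
sine = np.sin(2 * np.pi * 5 * t)   # clean 5 Hz test sine
clipped = hard_clip(sine)
```

The clipped wave spends most of its time pinned at the ceiling, which is why heavily distorted sounds feel loud and dense even at modest peak levels.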





I then created a shield/defensive spell sound. For this, I used the TableWarp2 synth engine in sforzando. The sound I was going for was a swelling, "electric"-sounding one, that would suggest the spell is harnessing power and powering up, like a magical forcefield. I set the first oscillator to a Xor wave, a preset wave for the oscillator, which sounded very strange, so I liked it. I sent this wave through an envelope filter, which gave the sound some strange modulation and a slow pulsating effect. An envelope filter alters the wave, creating modulation effects and warping the signal, adding more character to the sound (Hochstrasser Electronics. (n.d.)). I added a secondary oscillator with a resonant square wave, which gave it more of that pixelated/electronic sound, thanks to the cultural significance square waves carry from being used to make sound in vintage video games, like those on the Atari or the Game Boy Color. An example of square-wave synthesisers in action is Super Retro Thrash by Lich King, where the band used them to create chiptune/8-bit versions of their best pieces from previous albums. I used this sound to make the spell seem like it is powering up and gathering energy.
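A naive square oscillator, like the chiptune-style wave described, is just the sign of a sine. Here is a sketch in Python (NumPy), ignoring the band-limiting a proper synth would add to avoid aliasing:

```python
import numpy as np

def square_wave(freq, fs, duration):
    # Naive square oscillator: take the sign of a sine at the same
    # frequency. Real chiptune hardware used pulse generators, and
    # band-limited synths avoid the aliasing this version produces.
    t = np.arange(int(fs * duration)) / fs
    return np.sign(np.sin(2 * np.pi * freq * t))

wave = square_wave(freq=440.0, fs=44100, duration=0.01)
```

The abrupt jumps between the two levels are what pack the wave with strong odd harmonics, giving it that buzzy, pixelated 8-bit character.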


I then experimented with some samples a bit more and found a mysterious-sounding sample, which I used for a mystic wind spell/mystic power, which may be used for a curse or a similar effect performed by the player in combat. For this, I used Spitfire LABS' Organic Textures, in which I found a mysterious sound made by wind and thunder during a storm. The sound was very airy and mystic, but it had plenty of honky resonances in the mids, so I needed to filter those out with two surgical EQ cuts, at 500 to around 800 hertz and at 800 hertz to one kilohertz. I boosted some high mids at around two kilohertz, to add some presence. I also added a high shelf at four kilohertz and above, to add some much-needed high-end sparkle and definition, and to bring out the air frequencies, which sit at 16 to 20 kilohertz, adding definition to the wind. This also increases sibilance, which sits around four kilohertz and consists of a sharp, whistling "sss" sound that often needs to be filtered out of vocals using a de-esser, a low-pass filter or a surgical cut (Pro Audio Files. (2012)). When running a sound through an equaliser, it is important to know where sounds sit in the frequency spectrum, as it speeds up the procedure and makes mixing easier and more precise. For example, knowing that warmth and fullness sit around the 100 to 300 hertz mark allowed me to apply my low shelf tactically, to get the low end and rumble out of the sound. Likewise, knowing that mud and boxiness sit around the 200 hertz mark, I was able to remove them by scooping the low mids at 200 hertz with a wide cut. I then removed the surplus boom and bass/sub frequencies with a high-pass filter starting at sixty hertz (Battersby, P. (n.d.)).



I also made a magic missile/offensive spell sound. This sound was intended to be somewhat aggressive and alarming, almost like a rocket or bullet being launched towards the opponent, as it would represent the spells used to do direct, instant physical damage. For this, I used the TableWarp2 synth engine once again, starting with some dark noise, which is essentially a collective term for anything other than white noise (the static sound that can be heard from radios or TVs) or grey noise (which is like white noise but with perceptually balanced frequencies, reminiscent of the inside of a jet-powered aircraft during flight). Dark noise is a preset on the synth, and may refer to anything from pink noise, which is white noise that decreases by 3 decibels per octave, meaning it gets quieter the higher in frequency it goes, to red noise, where the power density (the rate of power output) decreases even faster with increasing frequency, meaning it gives out less energy at higher pitches (energyeducation.ca. (n.d.)).
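Pink noise's minus-3-dB-per-octave slope can be approximated in Python (NumPy) by scaling a white-noise spectrum by one over the square root of frequency. This is a sketch, not a studio-grade generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_noise(n):
    # Shape white noise to a ~1/f power spectrum (-3 dB per octave)
    # by dividing each FFT bin by sqrt(frequency).
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                  # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))   # normalise to +/- 1

noise = pink_noise(4096)
```

Because the scaling pushes energy toward the low bins, the result sounds deeper and softer than white noise, which matches the "darker" character described above.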


I then sent the noise through a square-bend envelope filter, which added a weird droning, pulsating modulation, and blended it with a Xor wave being sent through a saw-bend envelope filter. I set the ADSR envelope to a fast attack, a faster decay, a short sustain and a mild amount of release, which shaped the sound dynamically. The attack controls how fast the transient peaks, giving the sound a sharp, instant transient; the decay controls how the transient fades down; the sustain controls the level held while the trigger is pressed, keeping the sound short and instant, since the player will not hold the trigger for long; and the release controls how long the sound keeps going once the trigger is released. In this case, once the spell is fired with a mouse click/key press, the sound continues to play for the short time set on the release parameter until it finishes (Swisher, D. (2019)).
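A piecewise-linear ADSR envelope like the one described can be sketched in Python (NumPy). The times and levels below are illustrative, not my actual TableWarp2 settings.

```python
import numpy as np

def adsr(attack, decay, sustain_level, release, hold, fs=1000):
    # Piecewise-linear ADSR: times in seconds, sustain as a 0-1 level,
    # `hold` = how long the trigger is held after attack + decay.
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)   # rise to peak
    d = np.linspace(1.0, sustain_level, int(decay * fs), endpoint=False)
    s = np.full(int(hold * fs), sustain_level)                    # held level
    r = np.linspace(sustain_level, 0.0, int(release * fs))        # fade to silence
    return np.concatenate([a, d, s, r])

# fast attack, faster decay, short sustain, mild release, as described above
env = adsr(attack=0.02, decay=0.01, sustain_level=0.4, release=0.1, hold=0.05)
```

Multiplying a raw oscillator or noise signal by this envelope is what gives the sound its sharp transient and short tail.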



I ran the sound through the MT-A distortion pedal by Mercuriall Audio, which emulates a Boss Metal Zone, and cranked the distortion fully whilst keeping the volume (level) at 12 o'clock. This gave the sound some grit and fuzz, making it more electric/fiery and rough, like a missile of pure energy. On the pedal, I turned down the high mids, to remove the ear-piercing fizz, and turned down the highs, so the sound had less presence, while still retaining the fullness of the low end, which I cranked almost all the way up, to ensure a big, full sound.




I then ran the sound through Logic Pro's Pedalboard sim and applied the Dr Octave, an octave pedal that adds bass notes, one an octave below and one two octaves below. I also left the dry (direct) sound fully in the mix, with some additional drive. I used this to make the sound big, hot and fiery. An octave pedal essentially takes the dry sound of the input, such as a guitar, shifts its pitch one octave (12 semitones) up or down, possibly adding more than one octave, and blends these with the dry (original, un-shifted) sound, resulting in a sound that is sonically wide due to the resulting harmony; one note becomes more like a chord, almost like an orchestra. A popular example of an octave pedal is the POG by Electro-Harmonix (Andertons Music Co. (n.d.)). So I used the pedal to make the attack seem huge and powerful, due to the large sound.
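The octave-down idea can be crudely illustrated in Python (NumPy) by reading the signal at half speed, which halves every frequency, then blending the result under the dry signal. Real octave pedals like the POG use far cleaner pitch shifting, so treat this purely as an illustration.

```python
import numpy as np

def octave_down(signal):
    # Crude pitch drop: resample the signal at half speed via linear
    # interpolation (halving every frequency), then trim back to the
    # original length.
    idx = np.arange(0, len(signal), 0.5)
    stretched = np.interp(idx, np.arange(len(signal)), signal)
    return stretched[:len(signal)]

t = np.arange(2000) / 8000.0
dry = np.sin(2 * np.pi * 400 * t)       # 400 Hz test tone
blend = dry + 0.8 * octave_down(dry)    # dry fully in, sub octave layered under
```

The blend now carries energy at both 400 Hz and 200 Hz, which is the one-note-becomes-a-chord effect described above.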



I ran the sound through an equaliser, adding a high-pass filter to remove the extremely low sub bass, so it does not sound too rumbly, and turning down the mud with a slight scoop between 100 and 200 hertz. I also turned down the tinny, scratchy sounds at one kilohertz, and added a low-pass filter from two kilohertz onwards, which tamed some of the fizzy, harsh presence.





To finish off the sounds entirely, I made a lightning/storm attack sound with Spitfire LABS' Organic Textures, using thunder and rain sounds. I ran this sound through ChromaVerb, to make it seem huge and airy, with plenty of decay, as if it's resonating into the atmosphere from a huge storm. I used a Dark Room preset, but changed its parameters to suit the sound, turning both the Dry and Wet parameters up fully. "Dry" here means the sound unaffected by the reverb, and by cranking it all the way, it remains layered in with the reverb-induced sound, which is the "Wet" (Gateway, M. (2019)). I also cranked the Size parameter all the way up, to make the sound appear bigger and roomier, with Size essentially controlling the spread of the sound and the size of the acoustic space that ChromaVerb simulates. A wider, bigger room makes the sound bounce around and spread more slowly, and sound more hollow and airy when picked up by our ears or a mic (WebFX (2020)).
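The Dry/Wet layering can be sketched in Python (NumPy) as convolution with a toy impulse response plus the untouched dry signal; the impulse response values are illustrative, not ChromaVerb's algorithm.

```python
import numpy as np

def reverb_mix(dry_signal, impulse_response, dry=1.0, wet=1.0):
    # Wet path: convolve with a room impulse response (the reverb tail).
    # Dry path: the untouched source, layered back on top.
    wet_signal = np.convolve(dry_signal, impulse_response)
    out = wet * wet_signal
    out[:len(dry_signal)] += dry * dry_signal
    return out

ir = np.array([1.0, 0.5, 0.25, 0.125])   # toy decaying impulse response
click = np.array([1.0, 0.0, 0.0])        # a single click
mixed = reverb_mix(click, ir, dry=1.0, wet=1.0)
```

With both faders fully up, as described above, the original transient stays intact while the tail rings out underneath it.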







Evaluation:


Looking back at the start of the project, its aim was to show us a different perspective on a different job in the music industry, and likewise to increase our skills in a subject other than just composition, live sound or performance. We were given the opportunity to choose between performance, which I originally wanted to do, and which consisted of covering popular songs in a different genre, or production, which gave us an insight into the film and game music industry, where we would look into different scoring and sound design techniques to make an OST (original soundtrack), consisting of a score and a sound library, as I mentioned in my first post of the project. I was originally going to do performance; however, I was discouraged by the setlist and the fact that my group had a massively different music taste to mine, meaning that we wouldn't be making anything as dark, evil and heavy as I wanted. Meanwhile, I was very much into making my own music scores, being heavily inspired by the scores of different video games, such as the Heroes of Might and Magic, Diablo and StarCraft series, often using similarly styled orchestral and choral composition elements, as well as folk/baroque/medieval elements, in my personal works as Shadow Von Nyx. I therefore chose the production pathway, to improve my sound design skills, learn more about scoring and composition, and boost my orchestration skills, so that I can use them more effectively in my personal works.


The techniques I learned about during the project included Leitmotifs, field recordings, Foley, Mickey Mousing and underscoring. I have used most of them in my project work and have developed an understanding of how to apply them. One technique I used multiple times is the Leitmotif: a scoring technique that involves composing a short piece of music to represent a character, a situation, a concept or an object. An example, as I stated previously, can be something as simple as the two-note piano riff that plays whenever the shark appears in the film Jaws. In this case, I composed two Leitmotifs, consisting of short pieces of music that will play during battle scenarios, called Battle 1 and Battle 2. These are inspired by the battle tracks from Heroes of Might and Magic IV, which play when a player battles.


Another technique I used is underscoring, which I relied on heavily in most of my score. Underscoring is when the score (the music) of a film or game directly complements or emphasises what is happening on screen. For example, the intro cinematic to StarCraft: Brood War contains an ambient/rock score that complements the chaotic, tragic battle in the cinematic. This later changes to a "calmer" yet more sinister tone when the scene shifts to the main protagonists, Gerard DuGalle and Alexei Stukov, watching over the battle from the safety of their command ship in orbit and discussing their plan to take control of the sector; there, the score becomes more of a chamber/opera-style piece, both a bit sinister and slightly calmer than the previous scene. The music then takes a sinister twist and fades more into focus as both characters decide to abandon the colony and lift off into orbit, leaving the overrun colonists to fend for themselves, as I have explained in my previous posts. Similarly, I used underscoring to emphasise different scenarios and ideas: the Leitmotifs that play during the battles are much more angry, chaotic and aggressive than the laid-back "calm before the storm" piece that plays in the demo video while the character explores the level, and both differ from the main menu music, which sounds more glorious and majestic.


I have also made good use of Foley, which, as I stated previously, refers to the sound effects of a game, mostly of people moving. I made armour sounds, weapon sounds and spell-casting sounds. For spells I mostly used synths, to make them sound weird, alien and ethereal. I also made different footstep sounds: literal one-shot samples that play when a character steps on a surface, with different sounds for different surfaces such as grass and stone. Many foley artists achieve these sounds with field recordings, where they leave the studio with a recording set-up and capture actual footsteps or sounds in an "uncontrolled environment" such as a forest (Acoustic Nature, n.d.). For my footsteps, however, I used the ESP synth engine, with different parameters set on the ADSR envelope, which stands for attack, decay, sustain and release, and controls the dynamics and duration of a sound, to create the sounds of footsteps impacting different surfaces. I often layered sounds together to blend them into entirely new sounds: for example, I blended high-frequency white noise with a slow attack, short decay and sustain and a medium release, together with pitched-down white noise with a fast attack and virtually no DSR, to create the aural image of a foot stepping down on grassland, with the white noise making the actual grass-swish sound. I decided not to use field recordings because of the problems I encountered with background noise from the ambience of Lichfield, such as cars and people, which were difficult to filter out with a noise gate and impossible to avoid, as I experienced when making a field recording for the soundscape.
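The layering I describe above can be sketched in code. This is a hedged illustration only, not my actual ESP patch: the `adsr` and `noise_layer` helpers are hypothetical stand-ins written in Python/NumPy, showing roughly how a bright "grass swish" layer and a duller "thud" layer combine under different ADSR envelopes:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def adsr(n, attack, decay, sustain_level, release, sr=SR):
    """Build an ADSR amplitude envelope of n samples.
    attack/decay/release are in seconds; sustain_level is 0-1."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    s = max(n - a - d - r, 0)  # whatever time is left sustains
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),            # attack ramp
        np.linspace(1.0, sustain_level, d, endpoint=False),   # decay
        np.full(s, sustain_level),                            # sustain
        np.linspace(sustain_level, 0.0, r),                   # release
    ])
    return env[:n]

def noise_layer(n, lowpass=None, seed=0):
    """White noise; an optional moving-average lowpass stands in
    for 'pitching the noise down' to get a duller layer."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)
    if lowpass:
        x = np.convolve(x, np.ones(lowpass) / lowpass, mode="same")
    return x

n = int(0.25 * SR)  # a quarter-second footstep one-shot
# bright layer: slow attack, short decay/sustain, medium release
swish = noise_layer(n) * adsr(n, 0.05, 0.03, 0.2, 0.1)
# dull layer: fast attack, virtually no decay/sustain/release
thud = noise_layer(n, lowpass=64, seed=1) * adsr(n, 0.005, 0.01, 0.0, 0.02)
step = swish + 0.8 * thud
step /= np.max(np.abs(step))  # normalise so the one-shot never clips
```

The resulting `step` array could be written out as a WAV and triggered per footfall, with the envelope and filter parameters varied per surface.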


The soundscape is the background ambience of a game or film, as I explained in a former post; it plays under a scene and essentially serves to add realism, making the setting more convincing by using the actual background sounds of that environment. The game I worked on is set in ancient Greece, in the Mediterranean, with plenty of open ambience and shore settings due to its proximity to the sea. Unfortunately, I faced the aforementioned issues when trying to record wind sounds, and my field recordings failed. I therefore resorted to a plan B: I took an open ambience/meadow sound off YouTube, merged it with occasional birdsong from Spitfire LABS, and ended up with a good soundscape.


A final technique covered during the project was Mickey Mousing. This is when the music emulates the diegetic sounds of a scene (diegetic sounds being sounds the characters in the game or film can hear, as opposed to non-diegetic sounds such as the score, which the characters cannot hear). The term was coined after the early Mickey Mouse cartoons, which often had the background music emulate things happening on screen to add emphasis; for example, the brass section would play staccato honking sounds to emulate a car horn being pressed in the scene. The technique is rare in video games, and I personally could not find a use for it in my project, due to the lack of visual stimuli that could have worked well and the chaos that naturally comes with war. One could argue that I attempted it in one of my battle leitmotifs, where I used a staccato trumpet section to emulate a salpinx, a brass instrument used in war in ancient Greek times, and tom-tom sounds to emulate war drums; however, there are no actual visual stimuli of this happening, so it technically is not Mickey Mousing. This is arguably why the technique goes so unused in game scores: the action on screen in a video game depends on the player(s), whereas a film is choreographed and pre-recorded and plays back the same way every time, making it far easier to match the music to specific on-screen events. It is, however, done to an extent in video games, as heard in Heroes IV - Battle 6, where the music emulates the sound of a sword being drawn.


My research also helped with the project. Having a large knowledge of video games and game music, and having played many games with good music, I drew loads of inspiration from them. These games had OSTs that often consisted of folk, baroque, medieval, rock, ambient or orchestral scores, which are the types of music that inspire me as a person and inspire my music in general. I also researched how to orchestrate. Orchestration is essentially the "coloration" of music with different instruments, such as those found in an orchestra; together these instruments turn a simple riff into a huge and powerful-sounding ensemble, adding power and weight to what would otherwise just be a lone lyre. I looked into music theory to improve my orchestration skills, and found ways to make the many samples of different Greek instruments, such as choirs, drums, the salpinx, the hydraulis (water organ), the pan flute and the lyre, work together as an ancient Greek orchestra. I went down the rabbit hole of music theory by researching what scales were used in ancient Greece, and found that Greek music was actually microtonal, which, as mentioned in former posts, means it did not operate only in the 'conventional' Western system of tones and semitones. This allowed me to experiment and make the music more "dissonant", so it sounds different from a bog-standard folk score. I researched different scales, found contrasting information on which were actually used, and settled on the chromatic scale, to ensure the music sounds both dissonant and "liberal" enough to be convincing despite remaining within Western tones and semitones. I also looked into how to make my music microtonal, and used my guitar as a MIDI controller, which allowed the resulting sounds to be microtonal, since a guitar produces many microtonal inflections and harmonics that a keyboard instrument cannot.
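I did not script any of my tunings, but the maths behind equal temperament and microtonal steps can be sketched. As a hedged illustration (the `et_freq` helper below is hypothetical, not part of my actual workflow), splitting the octave into 24 rather than 12 equal parts gives the quarter-tones that fall between Western semitones:

```python
def et_freq(base_hz, step, divisions=12):
    """Frequency `step` equal-tempered steps above base_hz, with the
    octave split into `divisions` parts (12 = Western semitones,
    24 = quarter-tones, i.e. one simple flavour of microtonality)."""
    return base_hz * 2 ** (step / divisions)

a4 = 440.0
semitone_up = et_freq(a4, 1)      # ~466.16 Hz, the usual Western semitone
quarter_up = et_freq(a4, 1, 24)   # ~452.89 Hz, halfway in between
```

A pitch like `quarter_up` sits between two keys of a keyboard instrument, which is why bends and slides on a guitar reach it naturally while a standard piano roll cannot.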


I feel that the project has gone well, especially the composition of the music itself. I also like how the Foley turned out; it mostly sounds really good. If I could do anything differently, I would make some field recordings away from Lichfield, such as in the fields at Barton under Needwood, to get a better soundscape, and I probably would have recorded some actual footsteps if I had had the right premises and equipment to do so. Overall, though, I think the project has gone well, and I am pretty happy with how it turned out.







References:


Acoustic Nature. (n.d.). What is Field Recording: History & Uses. [online] Available at: https://acousticnature.com/journal/what-is-field-recording.



Anon, (n.d.). How to Fry Scream – SING AND SCREAM. [online] Available at: https://singandscream.com/how-to-fry-scream/#fryscream [Accessed 4 Mar. 2022].


Anon, (2018). How To Sing With Power (Glottal Compression) - Bohemian Vocal Studio. [online] Available at: https://bohemianvocalstudio.com/how-to-sing-with-power-glottal-compression#:~:text=It [Accessed 4 Mar. 2022].


Spitfire Audio (2020). What is a Round Robin? [online] Spitfire Audio. Available at: https://spitfireaudio.zendesk.com/hc/en-us/articles/360025864833-What-is-a-Round-Robin- [Accessed 3 Mar. 2022].


barbarabrownie (2015). The Sounds of Undressing. [online] Costume & Culture. Available at: https://barbarabrownie.wordpress.com/2015/10/04/the-sounds-of-undressing/ [Accessed 28 Feb. 2022].


Battersby, P. (n.d.). Blog Post: Mixing. [online] Virtual Playing. Available at: http://virtualplaying.com/interactive-frequency-chart/.


Dixon, D. (2019). Mix Bus 101: Why, When, and How to Group Tracks into a Bus. [online] iZotope. Available at: https://www.izotope.com/en/learn/mix-buses-101.html.


energyeducation.ca. (n.d.). Power density - Energy Education. [online] Available at: https://energyeducation.ca/encyclopedia/Power_density.


Farrant, D. (n.d.). What Are Dynamics In Music? A Complete Guide | HelloMusicTheory. [online] https://hellomusictheory.com/. Available at: https://hellomusictheory.com/learn/dynamics/.


Gateway, M. (2019). What Is Reverb? Reverb In Music Production & Mixing Explained. [online] Music Gateway. Available at: https://www.musicgateway.com/blog/how-to/reverb#:~:text=Dry%20simply%20means%20without%20any [Accessed 7 Mar. 2022].


Gemtracks Beats. (n.d.). What Does Sonically Mean in Music? | 2022 Music Production + Theory Guide. [online] Available at: https://www.gemtracks.com/guides/view.php?title=what-does-sonically-mean-in-music&id=1041 [Accessed 3 Mar. 2022].


Heckmann, C. (2020). Leitmotifs and Musical Themes Explained. [online] StudioBinder. Available at: https://www.studiobinder.com/blog/what-is-a-leitmotif-definition/.


Hochstrasser Electronics. (n.d.). An Introduction to Envelope Filters. [online] Available at: https://www.hochstrasserelectronics.com/news/introductiontoenvelopefilters [Accessed 4 Mar. 2022].


inSync. (2016). What is “Comping”? [online] Available at: https://www.sweetwater.com/insync/what-is-comping/#:~:text=In%20musical%20terms%2C%20%E2%80%9Ccomping%E2%80%9D [Accessed 3 Mar. 2022].


Joey Sturgis Tones. (n.d.). What Is Music Post-Production? [online] Available at: https://joeysturgistones.com/blogs/learn/what-is-music-post-production.


Pro Audio Files. (2012). Sibilance: Definition, Frequencies & Tips For Controlling Vocal Sibilance. [online] Available at: https://theproaudiofiles.com/vocal-sibilance/#:~:text=Vocal%20sibilance%20is%20an%20unpleasant.


Swisher, D. (2019). ADSR: The Best Kept Secret of Pro Music Producers! [online] Musician on a Mission. Available at: https://www.musicianonamission.com/adsr/.


The Martial Way. (2016). What is “Kiai”? [online] Available at: http://the-martial-way.com/what-is-kiai/ [Accessed 3 Mar. 2022].


WebFX (2020). Room Acoustics 101 | How to Get Good Room Acoustics. [online] Illuminated Integration. Available at: https://illuminated-integration.com/blog/room-acoustics-101/#:~:text=Many%20large%20rooms%20sound%20hollow [Accessed 7 Mar. 2022].


Wreglesworth, R. (n.d.). What’s the Difference Between Dynamic and Condenser Microphones? [online] Musician’s HQ. Available at: https://musicianshq.com/whats-the-difference-between-dynamic-and-condenser-microphones/#:~:text=The%20difference%20between%20a%20dynamic [Accessed 28 Feb. 2022].


Perfect Circuit. (n.d.). Learning Synthesis: Noise. [online] Available at: https://www.perfectcircuit.com/signal/learning-synthesis-noise.


YouTube. (n.d.). 3 Steps For LOUDER Mixes - Metal Mixing Tips. [online] Available at: https://www.youtube.com/watch?v=8TIIzwRdlqk [Accessed 4 Mar. 2022].

