
Production with Geoff (Billy Blue)

  • Nov 10, 2020
  • 8 min read

This week we practiced some production, to prepare us for the next phase of the project, which will be production itself. We did this by recording Geoff performing the song "Billy Blue".

The first thing we did was record Geoff’s acoustic guitar by close-miking, a technique that involves micing up specific parts of the guitar or instrument - https://www.dpamicrophones.com/mic-university/10-points-on-close-miking-for-live-performances. Close-miking is done in order to capture the highest sound pressure level of the source or instrument being recorded, which in turn gives a clearer mix. In this specific case, we placed a microphone directed at the strings and sound hole of the guitar, which captured the raw, organic sound of the strings and picking, as well as the body of the guitar, giving it a warm and full texture with generous amounts of low end. The output of the microphone was tracked in Logic Pro X as an audio recording.






An auxiliary microphone was also placed, directed towards the fretboard of the guitar. This was also close-miking, but it let us pick up a different sound from the same instrument: aimed at the fretboard, the microphone captured the sound of the strings against the frets, as well as more of the guitar’s high end. This was also tracked in Logic Pro X, but on a separate track, giving us two different-sounding tracks from one instrument, both of which could be mixed separately. When I mixed the session in Logic, I put both tracks in a Summing Stack. This is a small folder that keeps the tracks organised; any effects, such as an equalizer or compressor, placed on the stack affect every audio file within it, allowing group control as well as individual control when the stack is opened.
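To make the "group control plus individual control" idea concrete, here is a tiny Python sketch of how a summing stack behaves (the mic names and gain values below are made up for illustration, not taken from the session): each track keeps its own fader, and the stack’s own gain applies to the whole summed bus.

```python
# A toy model of a summing stack: each track keeps its own fader
# (individual control), and the stack's gain applies to the summed
# bus (group control). All names and values here are illustrative.

def mix_summing_stack(tracks, stack_gain):
    """tracks: list of (samples, track_gain) pairs; returns the stack output."""
    length = max(len(samples) for samples, _ in tracks)
    bus = [0.0] * length
    for samples, track_gain in tracks:        # per-track fader
        for i, x in enumerate(samples):
            bus[i] += x * track_gain
    return [x * stack_gain for x in bus]      # stack-level control

body_mic = [0.5, 0.4, 0.3]   # mic at the sound hole (hypothetical samples)
neck_mic = [0.2, 0.1, 0.1]   # mic at the fretboard
stack = mix_summing_stack([(body_mic, 1.0), (neck_mic, 0.8)], stack_gain=0.9)
```

Turning the stack gain down lowers both mics together while keeping their balance, which is exactly why grouping the two guitar tracks this way is convenient.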


When recording Geoff playing the song, we needed to make lots of takes to ensure the performance was as good as possible. When different takes are recorded onto the same track in Logic, a take folder is created, which can then be comped: the best sections of each take are selected and edited together, leaving out the bad bits, before being flattened into one file - https://support.apple.com/en-gb/guide/logicpro/lgcp317d758e/mac.

I mixed the guitar by removing the excess high end (anything over 10 kHz) with a low-pass filter. I then did some surgical EQ - this is where the Q of the EQ is very narrow, the Q value referring to how wide the peaks are, which in turn affects how many frequencies they act on - https://www.sweetwater.com/insync/surgical-eq-vs-tone-shaping-eq/. I did the surgical EQ by first narrowing the Q of the peaks, boosting the frequencies, and then sweeping the peaks around, using critical listening to figure out where the nasty resonances lay. Unsurprisingly, there was harshness in the high mids which, once located, I cut out using subtractive EQ, which is when you remove frequencies from a track using EQ. I also did some additive bell EQ. A bell EQ is a wider variant of the surgical peak that can cut or boost a range of frequencies - https://www.sonarworks.com/blog/learn/eq-curves-defined/. In this case I used it additively, boosting the midrange a little with a wide peak so that the guitar cuts through the mix. I also cut the 100-500 Hz region, as that is where the mud was, and I needed to make space for the bass guitar and bass drum.
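For anyone curious what a bell peak actually is under the hood, here is a rough Python sketch using the standard biquad peaking-EQ formulas from the RBJ Audio EQ Cookbook. This is generic DSP, not what Logic’s Channel EQ literally runs, and the 3 kHz resonance below is a hypothetical example: a high Q value gives the thin "surgical" notch, a low Q the gentle bell.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad 'bell'/peaking EQ coefficients (RBJ Audio EQ Cookbook).
    High q = narrow surgical peak; low q = wide bell."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin
    a0, a1, a2 = 1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def biquad(samples, coeffs):
    """Run samples through the biquad difference equation."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# A narrow -6 dB surgical cut at a (hypothetical) 3 kHz harsh resonance:
surgical_cut = peaking_eq_coeffs(fs=44100, f0=3000, gain_db=-6.0, q=8.0)
```

A nice property of the peaking shape is that it only touches frequencies around f0; the very low and very high ends pass through unchanged, which is why it is the right tool for removing one resonance without changing the overall tone.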





We then recorded Geoff’s vocal, tracking him on another track in Logic using a condenser mic. When using a condenser microphone, one has to be aware that its diaphragm is very sensitive and able to pick up very quiet, very small sounds; it needs phantom power to charge the capsule and run its internal amplifier, and it can be damaged by overly loud sounds. We also needed Geoff to stand a decent distance from the microphone, to avoid the proximity effect - this is where the microphone picks up more bass the closer the singer gets to the capsule - https://www.neumann.com/homestudio/en/what-is-the-proximity-effect. Two microphones placed too close together can also be a problem, as this can cause phasing.

We used an Aston Spirit condenser microphone for the recording, which was handy because the Spirit has switchable polar patterns. Polar patterns indicate how and from which directions a microphone receives its input; they can be cardioid, supercardioid, figure-of-eight or omnidirectional, and each differs in how the microphone can be used and what it will pick up - https://www.lewitt-audio.com/blog/polar-patterns. For example, a cardioid microphone is great for live recording and gigs, as it picks up sound from the front and sides of the capsule while rejecting sound from behind it, so it captures the singer and band whilst leaving out noise from the audience. We used, I believe, the cardioid setting, which captured only Geoff’s vocal. However, due to the high sensitivity of the condenser microphone, we also picked up lots of room noise, such as the sound of a drill, which I tried to get rid of in the mix through corrective mixing - mixing done to correct and improve the recorded sound, for example by removing sharp frequencies or adding a noise gate to remove the drill in the background.
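A noise gate like the one used against the drill can be sketched in a few lines of Python. This is a deliberately crude version (the threshold and hold values are arbitrary, not the ones I used): anything quieter than the threshold is muted, and the gate stays open for a short hold time once louder signal arrives, so words don’t get chopped off mid-syllable.

```python
def noise_gate(samples, threshold, hold):
    """Crude noise gate: output is muted until the input exceeds the
    threshold, then the gate stays open for `hold` further samples.
    Threshold and hold are illustrative values."""
    out, open_for = [], 0
    for x in samples:
        if abs(x) >= threshold:
            open_for = hold + 1       # signal present: (re)open the gate
        out.append(x if open_for > 0 else 0.0)
        if open_for > 0:
            open_for -= 1
    return out
```

Real gates also fade the gain in and out smoothly rather than switching hard, but the principle - quiet background noise between vocal phrases never reaches the mix - is the same.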

We also tracked several instances of Geoff at the choruses, singing in a different octave, going all the way up to falsetto, to create harmonies. This technique is known as double tracking - https://en.wikipedia.org/wiki/Double_tracking - where multiple instances of the same verse or passage in a song are recorded to give it width and texture, and to make it seem bigger, as though multiple people are singing or playing, allowing it to cut through more. Oftentimes guitars are recorded this way, with one track panned left and one panned right to make them sound massive in stereo, but the same can be done with vocals or other instruments.


On the vocal, I removed the excess bass (anything from 100 Hz or lower) with a high-pass filter, then added a low-pass filter to tame the excess air at the very top of the treble. I added a high shelf boosting everything from 3 kHz to 10 kHz, to brighten the sound and help it cut through more; a shelf EQ is a flat rise (or drop) at the high or low end of the spectrum that resembles a shelf - https://rebootrecording.com/high-and-low-shelf/. I then did some EQ sweeping with a surgical EQ to figure out where the sibilant frequencies were - sibilance is essentially a sharp "S" sound that can be overly harsh if not tamed with EQ. Unsurprisingly, again the harshness seemed to originate in the high mids, at around 2 kHz, so I used a surgical EQ with a slightly wider Q to cut it out entirely.
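As with the bell curve earlier, the shelf shape has a standard textbook form. Here is a Python sketch of high-shelf biquad coefficients from the RBJ Audio EQ Cookbook (again generic DSP, not Logic’s exact implementation, and the 3 kHz corner is just the example from this mix): the response stays flat below the corner frequency and is raised by gain_db above it.

```python
import math

def high_shelf_coeffs(fs, f0, gain_db, q=0.707):
    """Biquad high-shelf coefficients (RBJ Audio EQ Cookbook): unity gain
    below the corner f0, shifted by gain_db above it - the 'shelf' shape."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cw, alpha = math.cos(w0), math.sin(w0) / (2 * q)
    k = 2 * math.sqrt(a) * alpha
    b0 = a * ((a + 1) + (a - 1) * cw + k)
    b1 = -2 * a * ((a - 1) + (a + 1) * cw)
    b2 = a * ((a + 1) + (a - 1) * cw - k)
    a0 = (a + 1) - (a - 1) * cw + k
    a1 = 2 * ((a - 1) - (a + 1) * cw)
    a2 = (a + 1) - (a - 1) * cw - k
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

# A +6 dB brightness shelf with its corner at 3 kHz:
shelf = high_shelf_coeffs(fs=44100, f0=3000, gain_db=6.0)
```

The key contrast with the bell is that a shelf changes everything past the corner by (roughly) the same amount, which is why it suits broad "make it brighter" moves, while the bell suits one specific problem frequency.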




I also did some creative mixing on the vocals - this is a style of mixing that adds your own “flavour” to the mix and spices it up, rather than being a mandatory corrective process. In this case, I used HeavierFX from Heavier 7 Strings by ThreeBodyTech to add some effects to Geoff’s voice. I first added some chorus, which made his voice wider and bigger in the mix, and a large amount of reverb to make it sound like a gig or live recording and give it lots of atmosphere. I also wanted Geoff’s voice to have some delay and echo at certain points in the song, such as on the choruses, so I used a ping-pong delay pedal and made it come on and off at will by the use of automation.
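The ping-pong behaviour plus the automation lane can be sketched like this in Python - a toy mono-in/stereo-out version, with the `wet` list standing in for the automation curve drawn in Logic. Each echo bounces to the opposite side, and the automation value scales how much of it reaches the output at any moment.

```python
def ping_pong_delay(mono, delay, feedback, wet):
    """Toy ping-pong delay. `delay` is in samples, `feedback` scales each
    repeat, and `wet` is a per-sample 0..1 automation value that turns the
    effect on and off, like an automation lane."""
    buf_l = [0.0] * len(mono)   # left delay line
    buf_r = [0.0] * len(mono)   # right delay line
    out_l, out_r = [], []
    for i, x in enumerate(mono):
        echo_l = feedback * buf_r[i - delay] if i >= delay else 0.0
        echo_r = feedback * buf_l[i - delay] if i >= delay else 0.0
        buf_l[i] = x + echo_l    # left line takes the dry input...
        buf_r[i] = echo_r        # ...the right line only carries the bounce
        out_l.append(x + wet[i] * echo_l)
        out_r.append(x + wet[i] * echo_r)
    return out_l, out_r
```

Feeding in a single click shows the "ping-pong": the first repeat lands on the right, the next bounces back to the left, each one quieter by the feedback amount, and setting `wet` to zero over the verses silences the repeats entirely.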



We finally recorded a bass, through an audio interface. The interface allows the bass, or any other electric instrument, to be plugged in with a quarter-inch jack, and the interface then connects via USB to a device that can record audio. This, however, means that only the raw, direct sound of the instrument is recorded, so to get a full sound from the bass we needed an amp simulator. I used a bass amp simulator from Heavier 7 Strings' effects rack by ThreeBodyTech, with a matching cabinet and some bass impulse responses, and I also used a booster pedal to boost the gain of the bass's low frequencies.
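An impulse response is applied by convolution, which can be written in plain Python in a few lines (real plug-ins use FFT-based convolution for speed, but the maths is the same): every sample of the dry signal triggers a scaled copy of the cabinet’s IR, and all the copies add up into the "through the cabinet" sound.

```python
def convolve(dry, impulse_response):
    """Direct convolution: each dry sample triggers a scaled copy of the
    cabinet impulse response, and the copies are summed into the output."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

# Two dry samples through a tiny made-up IR (a direct spike plus one quiet
# later reflection). Real cabinet IRs are thousands of samples long.
wet = convolve([1.0, 0.5], [1.0, 0.0, 0.3])
```

This is why an IR of a real bass cabinet makes the raw DI signal sound "amped": the IR encodes how that cabinet and room smear and colour a single click, and convolution applies that colouring to everything you play.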




In order to mix the bass, we needed to make room for it in the theoretical “box” described by the cube theory. In essence, this theory says that every frequency within a song sits in the confines of a box, and each element needs its own space in order to be heard. Cutting everything below 100 Hz in every other instrument with a high-pass filter, and boosting those frequencies in the bass guitar and bass drum tracks, lets the low-end instruments be heard amongst the others. To add some crunch to the bass, we also boosted some high end so that it has audible grit and string noise.






Afterwards, a drum beat was added using a virtual drummer. I used Steven Slate Drums 5 along with some of my own samples to emulate the drums: I added a snare sound and a shaker, and gave the bass drum much more size and power by using a sample of a 36-inch Ludwig concert bass drum. I also routed every drum to a different output for further mixing, and converted the track to a MIDI track for better control.




I then equalized the bass drum and the overheads, adding more punch to the bass drum by boosting around the 4 kHz mark, where the punch and attack frequencies live, and also boosting its bass, but slightly higher up the spectrum than the bass guitar's, making sure to leave room for it.



I added more treble to the overheads to make them cut through more, and cut out loads of bass with a high-pass EQ to make room for the bass instruments. I also did some surgical EQ to remove a harsh resonance, or boxiness, from the mids, and added a slight boost at the 400 Hz mark to add some fullness and make them sound less brittle.




Finally, I added some compression to the whole kit to increase its attack and tightness. A compressor turns down the loudest sounds, and with makeup gain applied the quieter parts come up relative to the loud ones, which helped balance out the kit's volume - https://www.izotope.com/en/learn/audio-dynamics-101-compressors-limiters-expanders-and-gates.html.



I used the compressor's graph interface to see how it was shaping the sound wave, and added some makeup gain. I lengthened the attack, meaning the compressor comes in more slowly and lets the drums hit harder, and shortened the release, so that the compressor lets go faster, giving the drums some time to resonate.
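Roughly, the gain computer inside a compressor works like this Python sketch - a simplified feed-forward design with one-pole attack/release smoothing, where the threshold, ratio and coefficient values are purely illustrative, not the settings I used on the kit.

```python
import math

def compress(samples, threshold_db, ratio, attack, release, makeup_db):
    """Simplified feed-forward compressor. `attack` and `release` are
    one-pole smoothing coefficients between 0 and 1 (closer to 1 = slower),
    standing in for the time knobs on a real compressor."""
    env = 0.0                                  # smoothed level envelope
    makeup = 10 ** (makeup_db / 20)
    out = []
    for x in samples:
        level = abs(x)
        coeff = attack if level > env else release
        env = coeff * env + (1 - coeff) * level
        level_db = 20 * math.log10(max(env, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        gr_db = over_db - over_db / ratio      # e.g. 4:1 leaves 1/4 of the overshoot
        out.append(x * 10 ** (-gr_db / 20) * makeup)
    return out
```

The sketch shows why the attack and release settings matter for drums: a slow attack means `env` rises gradually, so the very start of each hit sneaks through uncompressed, and a fast release means the gain recovers quickly enough for the tail of the drum to ring out.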




I also finally added a small synth piano/bell sound at the choruses to match the vocal melody and add to it. The final product can be heard here: https://drive.google.com/file/d/1MbTQyqXYiWjpl__RpW5NUbTv-pdfxpO5/view?usp=drive_web


The things I took away from this process were the EQ techniques and what they look like, as well as the value of subtlety. When working on my own material, I will be able to use EQ better, having understood it properly and tied up any loose ends. I also learned how comping works, meaning that when I am tracking my guitar I will be able to use it effectively. Finally, I realized the importance of double tracking by recording two separate takes, instead of copying and pasting one take and moving or time-stretching it, as the latter sounds less natural.


 
 
 



©2019 by Shadow Von Nyx. Proudly created with Wix.com
