Propellerhead Sound Cards & Media Devices Driver Download

Published: 2017-08-31

Potential purchasers should note two things: (1) there is a nice demo version available for download on the Propellerhead website, so you can decide if you like the way the program sounds before you buy, and (2) you will need a MIDI keyboard, a decent sound card and good monitors (speakers) in order to enjoy the program. If you can, use the ASIO (Audio Stream Input/Output) driver, as it usually has the lowest latency and the best interface with the sound card. If you don't have that, try using the DirectX driver, or whichever driver has the lowest latency, which is shown right below the “Buffer Size” slider.


I think it is time for a technobabble article about Propellerhead Reason. Reading different Q&A posts (Reddit, forums and so on) shows that there is a lot of discussion about Realtek ASIO drivers and ASIO4ALL drivers; these subjects always come back. DirectX drivers are usually not really a subject matter, but I am going to bring them to the table and discuss them. Just to put this into context, the topic of audio card drivers has a history. First of all, there is latency. We all know that latency is bad for a few reasons: the response time of what is happening on screen needs to be in sync with what you are hearing, and if you play a midi instrument, high latency is a killer to work with. So in most cases you want as low a latency as possible while still enjoying what you are playing / making.

With that intro out of the way, let the technobabble begin

Why ASIO drivers?

Let's start with the basic question: why should I use ASIO?

ASIO stands for Audio Stream Input/Output. It is a technology originally designed by Steinberg to handle input / output directly towards the audio card. In normal operation the OS sits in between and determines what gets processed towards the sound card and other hardware (since this is normally determined by the OS itself), and how the sound card is accessed usually differs per operating system. From that angle, ASIO made things a lot easier to work with, especially when it comes to Digital Audio Workstations. Because the I/O is handled directly towards the audio card, the stream itself can be processed much faster.

Just to put this in a diagram, the following flow could be seen as the audio / video output path on a non-ASIO system:

Application → OS audio layer → sound card driver → sound card

If you compare that with the ASIO flow, it becomes a different story, since it looks more like the following:

Application → ASIO driver → sound card
Please take these diagrams with a grain of salt: I have taken a lot out of the picture, since there is more to it than this. But understanding that ASIO communicates directly with the audio card is the most important part to know, because this also leads to a lot of the issues with ASIO to begin with.

So the answer to the question: why ASIO?
Answer: because it communicates directly with the audio card.

Why are there different ASIO drivers?

This brings us to the next question that seems to pop up all the time. Just to name a few: Realtek ASIO drivers, Novation ASIO drivers, Hercules ASIO drivers and ASIO4ALL. The thing here is, some hardware companies decided to make an ASIO driver for their product. This allowed the hardware manufacturer to make the ASIO driver very specific to their product. The problem: maintaining such a driver to the latest standards is a time-consuming task, since they need to build both the regular drivers for each OS and the ASIO drivers on top of that. The workload only increased.

The answer to many manufacturers' problems came first in the form of ASIO4ALL. The major benefit of ASIO4ALL is that it acts like a transparent bridge to talk to, yet still has the benefit of communicating with the audio card driver directly, without a chit-chat with the operating system in between. Because ASIO4ALL was such a generic standard to work with, a lot of hardware manufacturers decided to stop producing their own branded ASIO driver. Realtek ASIO would be one of those that stopped being developed (there is another reason for this, which I will get back to).

Because at first ASIO wasn't that transparent, different companies decided to make their own unique ASIO driver (based on the specifics of Steinberg's original ASIO driver). ASIO4ALL kind of made that obsolete, yet there are still companies that make specific drivers for specific reasons.

This usually leads to the next question.. (tada)

What Audio Driver in Reason should I choose?

For the pre-Windows 10 era (I will just call it that), the drivers you can normally pick from are as follows:

  • MME drivers (always supported, highest latency)
  • DirectX (not always supported, average latency; when it works, it works flawlessly)
  • Company ASIO drivers (if you have the company-specific ASIO driver, go for that)
  • ASIO4ALL (low latency, has issues)

When you do audio recording (eg with a microphone) then there is only one option left:

  • ASIO4ALL (low latency, is the only option that allows you to record audio)

Why is ASIO becoming an outdated driver?

From a recording perspective in Propellerhead Reason, we still need (to this day) ASIO drivers to record. DirectX drivers have been improving since Windows 7 was released. Since early 2012 the HAL (Hardware Abstraction Layer) has been changed. Performance on macOS has improved a lot as well (the kernel was renewed). While software and hardware manufacturers strongly influence what is expected from either a PC or a Mac, the operating systems need to adapt to this as well. Latency is just one of those things.

As additional information, a few months ago Microsoft released an updated article about their latency changes on audio interfaces and how the operating system handles them: Low Latency Audio. They had to make these changes because the mobile market more or less dictates it. So from this angle, it makes sense that operating system drivers have much better response times than they did back in 2011.

Just as a fair comparison: my computer has not changed since the Windows Vista days (I have had it for 7 years now; only the video card and the power supply have been replaced). I also want to point out that I have been running DirectX drivers since day one, simply because I do complicated video recording sessions and I cannot use ASIO in those types of sessions (this gets very specific).

Back in 2007 I had a latency of around 128 ms. I adapted to this latency and got used to the idea that it was there while making videos.
In 2012 I had a latency of around 24 ms while running Windows 7, which was quite rock solid after the HAL was updated.
In 2017 I have a latency of around 2 ms (that is the best I can get in certain situations).

I use this as proof that DirectX latency has been decreasing over the years. There are several articles that reach a similar or the same conclusion on this topic. The only reason we still need ASIO drivers is that Propellerhead Reason does not allow an audio input when the driver is not ASIO (which I personally don't get at this point, but ok, that is a different discussion).

Another reason why ASIO might become a bit outdated as a concept: updates to ASIO are slow. They barely change features, nor really adopt new ones. The obvious reason is that the idea behind sending input / output (and the concepts) will mostly stay the same; manufacturers of new hardware need to adapt to the protocol itself (kind of like VST). This makes sense to some extent. But with operating systems closing in on latencies similar to what ASIO drivers offer, I can see ASIO disappearing entirely at some point. It is just a matter of time.

Problems with ASIO drivers

This kind of brings me to the last segment of this article. The problems with ASIO drivers are greater these days than using a sound card with, let's say, DirectX. Back in the day when Propellerhead wrote their 'What Audio Driver should I use' page, they stated the following:

  • Not all cards come with DirectX drivers. However, drivers for some cards are included with DirectX itself.
  • Using a card via a DirectX driver gives you a shorter latency, between 40 and 90 milliseconds.
  • If you use DirectX 3 or later, all programs that access the card via DirectX and make use of the DirectX “secondary buffer” feature can use it at the same time, and Reason Adapted version 4 can play in the background.

This says something. Because:

  • Every sound card comes with DirectX drivers these days. Name me one you have bought in 2017 that does not have a DirectX driver.
  • Using DirectX these days gives a lower latency than the mentioned 40/90 ms.
  • DirectX 3 was released September 15, 1996

Now, sorry to nitpick the documentation on their site. And no, I won't shut up either ;) But this always goes back to the whole discussion: DirectX vs ASIO and why ASIO is so much better. But ok, that is a different discussion.

The issues with ASIO I am getting a lot from users are the following:

  • When I launch the program, I have no sound. Why?
  • When I play reason for a while the audio card starts to play 'glitches'

The first question I answered not so long ago with the following (I'll just be lazy and copy-paste it):

  • Wrong driver selected in the audio preferences: this needs to be ASIO4ALL. If you use any other driver (like DirectX/MME), audio input will not work.

  • ASIO4ALL can't recognize the sound card. This is inside the 'Control Panel' under Preferences > Audio in Reason (or there is a separate tool for this called the ASIO4ALL control panel). It shows which sound cards are selected inside ASIO4ALL for inputs and outputs. If something is locked (or set up wrong), it will display an 'x' on the individual sections of the sound card.

  • Using ASIO4ALL + Reason + any other program using the sound card: no sound is playing in Reason (but the other program does work). The problem here is that ASIO4ALL locks the sound card for other programs. The best solution is to reboot the computer, launch Reason, then check the ASIO4ALL control panel settings once it has launched. In most cases this works.

The second issue is related to any driver, because it has to do with buffers, the number of samples played per buffer, the amount of latency and the performance of the computer itself.

Because the issue is very specific to the user, the computer, the setup, the drivers and so on, I cannot give a solid answer on why glitches sometimes start to occur when the buffer (and the number of samples) is set to a specific amount. The rule of thumb on buffers (and their safe spot) would be:

Set the buffer size halfway. This usually gives the best results regarding latency, playback and the audio driver itself.
Setting the buffer size too low may cause glitches in the output; setting it too high increases the latency.
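To make the buffer size vs. latency trade-off a bit more concrete, here is a small Python sketch of the theoretical minimum latency for a given buffer. Real drivers add their own overhead on top of this, so treat the numbers as illustrative only:

```python
# Theoretical one-way buffer latency: the samples in the buffer
# divided by the sample rate, expressed in milliseconds.
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    return buffer_samples / sample_rate_hz * 1000.0

# A few common buffer sizes at 44.1 kHz:
for size in (64, 256, 1024, 4096):
    print(f"{size:5d} samples -> {buffer_latency_ms(size, 44100):6.2f} ms")
```

Note how 1024 samples at 44.1 kHz lands around the 23-24 ms mentioned earlier, and 4096 samples falls in the 40-90 ms range quoted from the old Propellerhead documentation.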

With all this brought to the table (written down in two hours, after a week of research on the matter), I thought I would just share all my findings on the subjects of Realtek ASIO (since I have it), ASIO, ASIO4ALL and the latest trends regarding DirectX drivers.


Written by hydlide
Published: 2017-12-03

So you have your computer, you have your midi keyboard (or maybe not), the speakers are set to full power and you want to start making music. How do we do this? The answer: get a Digital Audio Workstation (DAW for short). While there are plenty of Digital Audio Workstations out there, you may need to dig into the world of DAWs a bit before making the proper choice. Since this site is a Propellerhead Reason site, we'll be looking at what Propellerhead Reason 10 is, what it does and how it can help you make music. This is the first article in a series of beginner guides to help you understand what Reason is and how it works.

So have lots of fun, and enjoy the read!

What is Propellerhead Reason?

Propellerhead Reason is a Digital Audio Workstation that helps you create music. The main core features of this DAW are:

  • A mixing console (also referred to as the SSL Mixer by many Reason users)
  • A Reason Rack, this is the place where the sounds are defined
  • A Midi Sequencer and audio track sequencer

Other main features which are worth mentioning here are:

  • Spectrum equalizers to sculpt the EQ stages of a specific track
  • 8 different send effects per track
  • Unlimited amount of insert effects per device (the CPU is the limit)
  • Midi Out to control external hardware or software through midi
  • Sample editor in devices that load samples
  • Different tools to manipulate midi
  • Players to create songs fast (since Reason 9)
  • VST Support in Reason
  • Record music directly as audio or play a midi instrument that plays software synthesizers

And many more. While there are plenty of things to cover regarding Propellerhead Reason, the above is just a tiny bit of what this program can do. Getting started with a program like this will always be a bit overwhelming at first, but there are a few things I will start with so you can get going by yourself pretty easily.

The learning curve of Reason

The average learning curve for a program usually looks like this: you have to invest a lot of time at first to understand the tools you're working with, since a DAW is nothing more than a tool. Once you start to understand the tool, you can easily adapt to make stuff work inside any program, whether it is FL Studio, Ableton Live, Cubase, Pro Tools or Reason. They have similar functions, yet accessing them requires knowledge (eg: reading chunks of manuals) to make use of those functions.

To be fair, there is one thing where Reason is different compared to other DAWs, and this is the part you will either hate or love. The Reason Rack is the metaphor for making sounds. The Rack is usually the scariest part when it comes to sound design or making your own custom sounds. However, it is quite easy once you start to understand the concept.

When the Rack metaphor is new, and you start to figure out how, for instance, control voltage works, then your learning curve may start looking like the following:

And this is usually where things tend to go wrong. The learning curve should normally be an awesome experience. This is driven by the amount of time you spend understanding a program, or, in musical terms, the time it takes to learn a piece (ever tried to learn how to play Für Elise in less than five minutes, flawlessly? If the answer is yes, you are either lying, or you're just lying). But I think you get the point.

Learn the things you really need. If sequencing is your thing, learn everything about the sequencer. How the workflow goes, what the tool window can do for you to make adjustments in a midi track. If mixing is your game, learn everything about the mixing console and how you can manipulate things. If you are not a sound designer and your mind is blown away by Combinators, wires and big synths then just use them as instruments. Learning sound design can come later.

Your drive should be learning how to use the tool to do your thing. You should never be dictated by the tool itself on how you want to work

With that line in black and white, let's move on.

First things you want to do to learn Propellerhead Reason

There are a few things you can do to make your learning experience an awesome journey. The first things you always want to grab are the user manuals and key commands:

  • Installation Manual (Easy)
  • Short key commands (just memorize the ones you'll need the most)
  • The operations manual (A Must read, I will get back to this)

Short keys that are often used

For the short key commands, there are a few keys you will most likely need a lot. I will write them down to make your life a bit easier:
F5 - Displays the Mixer window
F6 - Displays the Rack (instrument) window
F7 - Displays the Sequencer window
F2 - Brings up the Spectrum EQ (which is attached to the Mixer)
F3 - Brings up (or closes) the browser window
F4 - Shows the on-screen piano
F8 - Brings up the tool window (used for the Sequencer)

CTRL + F5 - Displays mixer window as separate window
CTRL + F6 - Displays Rack window as separate window
CTRL + F7 - Displays Reason as one window
By default, when I launch Reason, I prefer to use CTRL + F5 and CTRL + F6, because with the following key I can toggle between the three windows:
ALT + TAB - Toggle between windows

On the sequencer you have the following shortcut keys for editing:

Q - brings up the arrow. The arrow is used in the sequencer to select or drag things through the sequencer window
W - brings up the pencil tool. The pencil tool is used to draw in notes manually
E - this is the erase tool. This is used to quickly delete multiple segments in the track
R - This is known as the Razor tool. While you can join segments in the sequencer or draw longer segments, the razor tool is used to cut segments (eg: cut a 1/8th note into two 1/16th notes)
T - This is the mute tool. The mute tool is handy while prototyping different things in an arrangement. The same tool is used to unmute a segment
Y - The zoom tool, for zooming in and out.
U - the hand tool is for dragging.

CTRL + Mouse wheel - zoom in / out vertical
CTRL + SHIFT + Mouse wheel - zoom in / out horizontal

Double click on a clip - Enter the Edit mode (midi note lane)

Ctrl + C - Copy
Ctrl + V - Paste
Ctrl + Z - Undo
Ctrl + S - Save the song file

Ctrl +T - Create a new Audio Track

Tab - Toggle to Rear / Front view of the Reason Rack (Rack needs to be visible)

Check the preferences and settings

After installation the program asks for certain settings that you can change later on. These settings are located under:

Menu > Edit > Preferences

Normally you set these up once. But in case your hardware changes (eg: a midi device gets connected to the computer), you might need to come back here later. If you made a mistake during the installation process, this is the place to make corrections. You may also refer to this panel when, for instance, there is all of a sudden no sound in Reason. This sometimes happens when you use an ASIO driver; it is an issue we addressed in our article about ASIO4ALL. You may need to switch to a different audio card driver in different situations. For instance, if you need to record something (with a microphone) directly into the program, you want to use an ASIO driver, since it is the only driver that allows a direct microphone input.

The preferences are also needed once you change your hardware setup. Think of hooking up another midi device to your computer: you will need to tell Reason to use that device.

Other than that, there is barely any need to change any settings. You basically set them up once and barely touch them again. The only reasons to change these settings are:

  • Change in a default template
  • Change in sound card
  • Change in Midi device
  • Change in VST folder structure

VST Support with Reason

Within the preferences, you can set the VST folder where your plugins are located at. There are at this point a few things you will need to know to make a VST work within Reason:

  • The VST plugin needs to be 64-bit
  • Only Instruments and Effects are supported
  • The VST folder needs to be set in the preferences section (you point to it once, you restart Reason and the program will scan in that folder which plugins you have)

That is all there is to it. You set it up, restart the program, and you can use the VST plugins you have installed.

What are Rack Extensions?

A follow-up question might be: what are Rack Extensions, and how do they differ from VST plugins? The Rack Extension platform is designed especially for Propellerhead Reason. These types of plugins are made to integrate within Reason itself, while VST plugins are external plugins that always have their own separate layout or look and feel. So Rack Extensions are like rack devices specially built for the Reason Rack. Some Rack Extensions are already included in Propellerhead Reason 10, just to name a few: Pulsar Dual LFO, Europa, Grains, The Echo and Alligator.

Your first session with Reason

Once you are set up, you can start your way into creating your first session. I know a lot of people who start with Reason are at first a bit scared of the Rack metaphor. Please don't let it bite you. The following video might give a few insights on how to get started with Propellerhead Reason.

Getting started with Reason 10

The first time you start the program, it may look like a bit 'too much'. If you use the tool often, you will become aware of how it works. The first thing you most likely want to do (just to get a sense of what is going on) is use the keys F5, F6 and F7 to navigate from the Mixer to the Rack to the Sequencer.

The Mixer of Reason (F5)

The mixer window is used to mix. This is something we will only need later (to balance the song).

The Reason Rack (F6)

The Rack window is something you might need to tweak certain instruments, set up effects or change different audio paths (forget this if you are new, ok!). The Rack overview is a nod towards the old 19-inch rack space. If you are familiar with DAWs, then this one will look a bit different compared to the others. If you are new to these tools, you may even skip this window (it might become handy later).

The Sequencer (F7)

This is the heart of where we define which instrument is playing what and when. If you are a beginner to a DAW like this, then this screen is the best place to start, since from a sequencing perspective you have most of the tools at your disposal to get going right away. On the left-hand side of the screen you see the browser section. This contains the different instruments / effects / utilities you can use. You can drag an instrument directly onto the sequencer and use the on-screen piano (press F4) to play it.

To hide the browser window you can hit the F3 key. To display it again hit the F3 key again.

The Editing Tools for the Sequencer

If you have ever used a program like Photoshop, you will have seen 'tools' in a separate window very often. In the sequencer window you have different tools at your disposal at the top of the sequencer:

These tools correspond to the keys we brought up before. So here they are again:

Q - brings up the arrow. The arrow is used in the sequencer to select or drag things through the sequencer window
W - brings up the pencil tool. The pencil tool is used to draw in notes manually
E - this is the erase tool. This is used to quickly delete multiple segments in the track
R - This is known as the Razor tool. While you can join segments in the sequencer or draw longer segments, the razor tool is used to cut segments (eg: cut a 1/8th note into two 1/16th notes)
T - This is the mute tool. The mute tool is handy while prototyping different things in an arrangement. The same tool is used to unmute a segment
Y - The zoom tool, for zooming in and out.
U - the hand tool is for dragging.

The most obvious tools to use in the sequencer are the pencil tool for drawing in notes and the arrow tool for selecting notes, bars and so on. The other tools have their purposes, but from a beginner's perspective you will most likely hop between the pencil tool and the arrow tool a lot.


Toggle to Edit mode

At the top of the sequencer you can find an Edit mode button (the shortcut key for this is CTRL + E). This button is a toggle to quickly jump between Song overview mode (the default) and Midi Edit mode. Once you hit Edit mode, you are able to draw in notes using the pencil tool.

The main concept is that you have an overview of the song where you can see, in 'bars', which instrument is playing when. In Edit mode, you will see the notes / velocity settings of the specific instrument you have selected. By using CTRL + Arrow Up and CTRL + Arrow Down you can hop from one instrument / track to the next. If you have, for instance, 16 different tracks, you can also exit Edit mode, select the track you want to view, and re-enter Edit mode. There are different ways to go from track to track; you will have to find out for yourself which method works best for you.

Does a Digital Audio Workstation need special hardware to get started?

In the last few paragraphs I have been talking about midi, midi keyboards, microphones, guitars and other gear, so the question often gets raised whether any additional hardware is required to use a DAW. The basic answer: no! You do not need any additional hardware to get started with a DAW.

The only thing you'll need is a computer, a keyboard you can type on (like the one I am writing this article with) and a mouse. Really, that is basically it. A brain might come in handy, and a little bit of creativity to get things going. If you have seen my work within Reason from time to time on my YouTube channel, you may have noticed that some of the work I do is with just the keyboard and mouse. In the old-school days (the Reason 4 era) I didn't even have any midi keyboards lying around to do my thing.

Midi keyboards, audio interfaces and other input devices can make life a lot easier, since in theory you are using 'shortcuts' a lot to do a similar task. It just depends on what fits your needs and what you want to do with it.

So let me try to help you with this and make the journey a bit more complete while getting started with Propellerhead Reason.

What hardware should I be getting?

If you are interested in additional hardware, you might need this piece of information. It is going to be rather basic: a starting reference. I suggest you dig into different options on different websites (eg: google your way through this while gathering more intel about the subject). There is one basic thing in particular to take into account:

- Not all hardware is natively supported within every DAW

The major problem with some devices is that they tend to be rather specific in what they can do. Just an example: there is a piece of hardware called Ableton Push 2. This product is specifically designed to work natively within Ableton Live (a different Digital Audio Workstation). A similar thing could be said for Maschine; this device was built for the Native Instruments VST plugin of the same name. Some parts of these tools can be used in different DAWs; however, some functions may not work as intended.

The majority of midi devices (if they are midi compatible, that is) will in most cases work within any DAW.

For Propellerhead Reason there is a list of control surfaces that are supported by default with the program itself. While not everything is on this list, it is almost guaranteed that a device will work in an instant, just because most devices use the midi standard to talk to different programs. There are, however, some functions that may work a bit differently (or unexpectedly) when using a device that is not natively supported in the DAW itself.

A few common issues people have been talking about:

- Faders are not working
- Rotaries are not working
- Certain buttons do not work as expected in the DAW itself.

This is where a platform like Remote comes in.

Remote is nothing more than a fancy way of translating specific midi signals (such as faders, knobs, buttons and so on) into specific tasks within the program for specific devices. There is another article that dives into the subject of Remote. While these are basic guides, I can add that setting things up properly can be a time-consuming challenge. You only have to set it up once, though, and in most cases half of the functions are most likely going to work out of the box. So no need to worry about it.
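Conceptually, a Remote-style mapping is just a lookup table from incoming midi controls to named actions in the program. This is not Propellerhead's actual Remote codec format; the controller numbers and parameter names below are purely hypothetical, just to illustrate the translation step:

```python
# Hypothetical sketch of what a Remote-style mapping does: translate
# raw midi control-change numbers into named parameters in the DAW.
# The mapping entries here are made up for this example.
MAPPING = {
    7:  "master_volume",   # a fader on the control surface
    10: "pan",             # a rotary knob
    64: "sustain_pedal",   # a button / switch
}

def translate(cc_number, value):
    """Turn a (controller, value) pair into a (parameter, value) pair."""
    parameter = MAPPING.get(cc_number)
    if parameter is None:
        return None  # unmapped control: the DAW simply ignores it
    return (parameter, value)

print(translate(7, 100))   # a mapped fader movement
print(translate(99, 100))  # an unmapped control
```

This is also why an unsupported device can still "mostly work": any control that happens to send a mapped message does its job, while the rest falls through.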

From my experience, most M-Audio interfaces usually work straight out of the box, because they have been tested and integrated with programs like Propellerhead Reason. It is just a matter of searching for which device suits your needs. Google around to see if it is supported in Reason itself; if it is not, ask around on social media or forums. And if all else fails, try contacting customer support (in most cases they will give similar advice to what I have given here, since they do not know all the hardware that is out there).

Midi and Audio

One thing you need to understand is the major difference between midi and audio, since these are two separate things that sometimes get mixed up. From a technical point of view, midi is a protocol standard created in the early 1980s. This standard defines note information, from pitch to velocity (often known as 'loudness'). Midi by itself does not make any sound at all. The note information usually gets fed to a device that can receive midi data, and that device translates the data into sound. From a Digital Audio Workstation perspective, the sequencer is filled up with midi data, which then gets sent to the device attached to that specific midi track for playback. In Propellerhead Reason, that device can be an instrument, a Rack Extension or a VST plugin.
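To illustrate the point that midi is just note information and no sound at all: a standard midi note-on message is only three bytes on the wire (status, pitch, velocity). A quick Python sketch:

```python
def note_on(channel, pitch, velocity):
    """Build a raw midi note-on message: status byte 0x90 + channel,
    followed by pitch (0-127) and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

# Middle C (midi note 60) at a fairly loud velocity, on channel 1:
msg = note_on(0, 60, 100)
print(msg.hex())  # -> "903c64": three bytes, no audio whatsoever
```

The device on the receiving end (an instrument, Rack Extension or VST) is what turns those three bytes into actual sound.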

Audio, on the other hand, is just 'raw' data. In terms of sound, it is what it is: the audio snippet will play the same way as it is put on an audio track. Because this requires less processing, audio tracks are often less CPU-intensive to play back than converting midi notes through a device that has to render everything in real time. The downside of audio tracks is that, because they are 'raw' data, they are harder to manipulate. There are some creative effects you can apply to them, and you can chop audio up and sequence it differently, but in many ways you will have less freedom when it comes to manipulating audio.
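And to illustrate how 'raw' audio data really is: one second of a 440 Hz sine tone is nothing more than a list of 44100 sample values. A quick sketch in plain Python (no audio libraries needed):

```python
import math

SAMPLE_RATE = 44100  # samples per second
FREQ = 440.0         # A4

# One second of audio is nothing more than 44100 numbers
# between -1.0 and 1.0.
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

print(len(samples))  # one second's worth of raw sample values
```

There is no note, velocity or instrument in there; manipulating it means recalculating the numbers themselves, which is exactly why audio is cheaper to play back but harder to edit than midi.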

From a beginner's point of view, this type of information might seem quite useless to read. But later on it might play a bigger role in the way you work (hence I am bringing up the subject).

If you are someone who plays a real guitar and wants to record it in real time inside a Digital Audio Workstation, then to some degree you only need to learn the basic steps of recording several takes and learn about the Comp Edit mode. Maybe learn some effect processing to make it sound just a little bit more polished, and you have the basics nailed.

If midi is your thing then you will most likely love a Digital Audio Workstation like Reason.

Audio tracks in Propellerhead Reason

Bounce Tracks in Reason

I have addressed the two options for creating sounds: midi tracks that are connected to an instrument inside Reason (or an external hardware synth), and audio tracks which are raw audio files. When midi tracks drive heavy CPU usage on certain instruments, you can decide at some point to bounce the 'heavy load' to an audio track so you can still work on the track as is. The downside of working with an audio track is that it is what it is; there are effects you can apply to change the sound output, but that is about it. The major benefit of bouncing tracks down to an audio file is that they use far less CPU. Sometimes you may need to bounce tracks to audio to prevent playback from stopping (because the CPU can't handle all that awesomeness).

Workflow in Reason

If you look at the workflow of Reason, there are different ways to make it work for you. Different scenarios come to mind, and every scenario may well require its own unique workflow. This guideline presents a few different workflow methods. Once you work with the tool for longer, you may adapt these and define a workflow of your own. In theory, there are the following:

  • Sequencer oriented
  • Rack oriented
  • Hybrid method
  • Live oriented

In the next few paragraphs, I would like to address the major differences between these four, since they work quite differently from one another.

Sequencer oriented

The sequencer defines every note played in a sequence (hence the name). In theory, you can use this Digital Audio Workstation by looking at the Sequencer alone. Pressing F-3 opens the browser window, and from there you can drag and drop everything straight onto the sequencer: patches, sound files (as individual hits), longer audio files (that you want to chop up later) and so on. With this method you focus mostly on the track: placing notes and recording your own sounds (for instance by playing a guitar or MIDI keyboard). You do not need any sound design skills to make this work for you (more about this later). The sequencer-oriented approach (which comes down to starting your Reason session by hitting the F-8 key) is all about making music, not about fine-tuning every setting out there. It is a fast-paced way to get stuff done, and it works fine as it is.

Rack Oriented

Most people who start with Propellerhead Reason as a DAW are overwhelmed by the look of the Rack. Still, some people are quite Rack-oriented in their workflow. This workflow does not require the sequencer at all, because the rack can in theory be seen as a sound design tool where the sequencing happens within the rack itself. Everything relies on wiring cables and making the connections work: kind of similar to setting up a really big Eurorack system, but in software. I might note that only a limited group of people really wants this workflow, because it requires a different thought process to make things work (if you have only used a different DAW like FL Studio or Ableton Live, this workflow might be totally new to you).

The Hybrid method

The hybrid method is what most advanced users end up with once they are familiar with a Digital Audio Workstation, since this workflow requires some knowledge of sound design and, at the same time, knowledge of how to make a sequence work. In theory, this workflow is all about hopping back and forth between the Rack (F-6) and the Sequencer (F-7). Most people will go to the Reason Rack first, lay down some sounds in the Rack itself, then go to the sequencer to lay down how the sequence should play, and finally go back to the Rack to adjust the sounds themselves (and maybe back to the sequencer again to automate them).

Live oriented

This is a whole different way to use Reason, since it all depends on the hardware interfaces you will be using. A hardware interface could be a MIDI keyboard, an AKAI MPC type of input device or a Behringer mixing console. It is hard to explain live performance setups in Reason in a few words, since these things can be either rather simple to set up or very specific to the performer's needs. This is why we won't be looking into it in this article (since this is an introduction to Propellerhead Reason).

When you first use Reason, the main focus will most likely be the Sequencer. Sometimes you may need to go to the rack to adjust certain settings. The mixer console does not require much attention in the first days: the mixer console is mostly about mixing, a whole different process that we addressed a few years ago in our mixing guide.

The browser window covers most of your needs when it comes to accessing instruments (for instance, if you need a string sound, hit F4 to open the browser window, type the word 'string' in the search box and you'll see various string patches). But there comes a time when you start looking at the factory sound bank and raise a few eyebrows at how it is set up.

So, we'll address this in the next few paragraphs.

The factory soundbank in Reason

Since Reason 9, the factory soundbank has been divided into 3 main categories [note: with the introduction of Reason 10 the soundbank has been expanded with 3 GB of sounds]. In theory, you will have the following banks:

  • The factory soundbank
  • The Orkestral bank
  • The Reason 9 sounds

These banks are set up quite differently from one another. Let me try to explain them.

The Factory Soundbank

Inside the factory soundbank, the patches are laid out per device. So, in this case, you'll have a line-up for the following:

  • redrum drum computer
  • kong drum designer
  • thor the polysonic synthesizer
  • subtractor analog synth
  • malstrom graintable synthesizer
  • nn-xt patches
  • nn19 patches
  • rv7000 patches
  • combinator patches

and so on. In theory, this helps you understand what the main instruments in Reason do for you, since a lot depends on the type of sound you will get from them. While this sound bank originally dates from Reason 1 through Reason 8 (there have been many minor updates), the patches are still on par with what Reason's native devices can do and sound like. I will go a bit more in-depth on the different devices later on.

With Reason 10 a few new devices were introduced, like Europa (the Shapeshifting Synthesizer) and Grains.

The Reason 9 sounds

These sounds were introduced with the launch of Reason 9 (and extended with Reason 10). They contain a wide variety of patches, divided into categories based on what you would expect from the sounds. This includes:

  • basses
  • leads
  • pads
  • rhythmic patches
  • plucked sounds

and so on.

The orkestral soundbank

This bank is a bit special, since it is designed entirely for one specific instrument inside Propellerhead Reason: the NN-XT advanced sampler. Originally this bank was introduced back in the days of Reason 2.5. The patches cover a wide variety of orchestral sounds such as violins, violas, flutes, oboes and percussion elements. This bank is pretty much outdated by now, and there are plenty of free alternatives such as the VSCO Community edition; if you want to build your own custom sets, you can grab the individual sounds from GitHub.

Alternative methods for getting orchestral sounds come in the form of Rack Extensions or VST instruments. The most popular way to build big sets seems to be Kontakt.

From a starting point of view, the orkestral soundbank contains most of the sounds you will need to make orchestral set pieces. It covers most of the important sounds to get started, and they are organized pretty well by play style and method (staccato, forte, mezzo forte, piano, etc.).

Devices in Reason 10: a quick guide

At a glance, each device in Reason has a specific goal and play style. In the next few paragraphs, I would like to touch on certain elements and explain what I mean by this. Most of what you will find here is also well covered in the Reason 10 manual, but generally speaking there are devices for specific tasks.

Drums and percussion

To create beats in Reason 10 there is Kong the Drum Designer and the Redrum Drum Computer. Kong was originally modeled after an MPC look and feel, while the Redrum is a nod towards sequencing the TR-808 or TR-909 way. Both have their uses, and they are two different things at the same time.

Kong supports up to 16 different sounds per device (an MPC has 16 pads), while the Redrum lets you load 10 sounds per device. Another major difference (more of a technical background thing) is that the Redrum only takes samples by default, while Kong can contain REX loops, synth drums and physically modeled drums.

Many people prefer Kong because of its alignment with the MPC play style: the pads are velocity sensitive, whereas the Redrum (using its step sequencer) is limited to playing notes hard, medium or soft. Others prefer the Redrum drum computer precisely because of the integrated step sequencer.

While this is just a beginner's guide to get you going, there is much more to this than just playing sounds in a sequence. On this website, you can find some more advanced methods using the Redrum drum computer or the Kong Drum designer.

Synthesizers

Inside Propellerhead Reason there are different synthesizers for different tasks. Just to list these real quick:

  • The Subtractor Analog Synthesizer - a basic subtractive synthesizer
  • Malstrom Graintable synthesizer - a 'graintable' synthesizer, a cross between granular and wavetable synthesis
  • Thor Polysonic synthesizer - a workhorse of a synth
  • Europa Shapeshifting synthesizer - a wavetable engine with additive synthesis

As the descriptions next to each synthesizer suggest, most of these are built for specific tasks. The way they are controlled differs slightly from one to another, simply because of the technique that defines each synthesizer. Europa is a more modern synth, while the Subtractor is an old-school synthesizer (and still has its charm because of it).

To get started with these, you could load them up and browse some of the patches that come with them. You can then look up what defines each synthesizer and how the oscillators relate to specific parameters. The easiest synth to understand is the Subtractor, since it plays a specific waveform that can then be modulated in different ways. The Malstrom is most likely (to this day) the hardest synth to wrap your head around. If you want flexibility (trying to create almost every synthesized sound out there), a good tip would be to look at Thor and Europa.

Not so long ago I did a workshop on how to deal with Thor the Polysonic Synthesizer. While writing this article I am still in the process of making 6 or so episodes about Europa.

Samplers

Reason 10 contains (looking at the stock devices only) 3 different types of samplers:

  • the NN-19 - a very basic sampler
  • the NN-XT - an extended sampler ideal for layering samples
  • Grains - a granular sample manipulator

At some level, the Redrum drum computer and the Kong Drum designer also fall into this category. But since those two are more meant for playing single hits in a sequence, they are not ideal for making complete polyphonic sequences out of samples.

The NN-19 is a traditional sampler that is most often used to capture a sound and play it back from the sequencer. It has basic features for splitting the keyboard into ranges (so you can play specific samples on specific keys), and it contains filters and envelopes. To some degree, you can see the NN-19 as an advanced way of playing one single channel of a Redrum Drum Computer (as we discussed on this website: NN-19 basics).

Where the NN-19 shines is its layout (every control is easily accessible) and the simple way it works. It is an easy sampler to understand: you can load up a wave file, throw it on the sequencer and play it back. It is that 'simple'.

Where the NN-19 falls short for most users: the lack of sample layering and pitch shifting.

The NN-XT in Reason 10 allows you to layer samples in a wide variety of ways:

  • set keyboard ranges per sample
  • use different velocity settings per sample
  • use round-robin tables (randomly pick a sample)
  • define zones (group samples together)

By default, the NN-XT looks like a traditional synthesizer: you have basic controls for filter frequency, filter resonance, and amplitude attack, decay and release. But the NN-XT also has an 'under the hood' view, which you open by clicking the 'remote editor' arrow on the left side.

If you are using patches, you barely have to use this screen at all. If you want to manually layer your own samples, then this section is quite useful. An alternative method to layer samples would be using a Combinator (more on this later).
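To illustrate what a layered setup does under the hood, here is a hypothetical Python sketch (the zone layout, sample names and selection logic are my own illustration, not the NN-XT's actual engine): each zone claims a key range and a velocity range, and when several zones match the same hit they behave like a round-robin pool.

```python
import random

# Hypothetical zone table: each zone claims a key range and a velocity range.
ZONES = [
    {"sample": "piano_soft.wav",   "keys": (0, 127), "vel": (1, 63)},
    {"sample": "piano_hard_a.wav", "keys": (0, 127), "vel": (64, 127)},
    {"sample": "piano_hard_b.wav", "keys": (0, 127), "vel": (64, 127)},
]

def pick_sample(note, velocity, zones=ZONES):
    """Pick the sample a layered sampler would play for one incoming note."""
    matches = [z for z in zones
               if z["keys"][0] <= note <= z["keys"][1]
               and z["vel"][0] <= velocity <= z["vel"][1]]
    if not matches:
        return None
    # Several matching zones act as a round-robin pool: picking one at
    # random avoids the repetitive "machine-gun" sound on repeated hits.
    return random.choice(matches)["sample"]

print(pick_sample(60, 40))   # piano_soft.wav (only the soft layer matches)
print(pick_sample(60, 100))  # one of the two hard layers, chosen at random
```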

Grains is a more complicated sampler. It is very good at taking an existing sample and turning it into something creative. It shares a trait with the NN-19 in that it can only play one sample at a time, although inside a Combinator you can layer different sounds at the same time.
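As a toy illustration of the granular idea (this is my own sketch, not how Grains is implemented): chop a buffer into fixed-size grains and reassemble them in a different order. The source material is unchanged, but the playback order, and therefore the sound, is new. A real device adds windowing, grain overlap and pitch control on top of this.

```python
import random

def granulate(samples, grain_size, seed=0):
    """Toy granular manipulation: chop a buffer into fixed-size grains,
    then reassemble the grains in a shuffled order."""
    grains = [samples[i:i + grain_size]
              for i in range(0, len(samples), grain_size)]
    rng = random.Random(seed)  # seeded so the result is reproducible
    rng.shuffle(grains)
    out = []
    for grain in grains:
        out.extend(grain)
    return out

source = list(range(12))     # stand-in for an audio buffer
print(granulate(source, 4))  # same samples, grains reordered
```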

The following video from Propellerhead might shed some light on the subject.


Playable instruments

Reason 10 has a wide range of sounds in the factory sound bank, and most of the playable instruments there use the NN-19 or NN-XT samplers. But there are also some devices in Reason 10 that work as instant instruments: just load them up and play. These devices are:

  • ID8 player
  • Klang
  • Pangea
  • Humana
  • Radical Piano

Effects in Reason 10


There are two different types of effects in Reason: send effects and insert effects, and there is a big difference between the two. I have written an in-depth article about the differences: send vs insert effects.

Effects are mostly handled inside the Rack window (F6). The send effects are usually placed under the master section.

To enable a send effect, you can find the settings in the Mixer window (F5).

To enable the effect on a specific channel, enable the Send button (marked 1 through 8). The level determines how much of the effect is applied.


To add an effect to the send chain, drag the effect from the browser section and drop it under the mastering section.

Insert effects are effects created per instrument. Above each instrument you can see a 'mix' channel strip, and that mix channel can contain a series of effects (depending on how you set it up).

In most cases the following rule of thumb applies:

  • Send effects are usually set fully to Wet (in case the effect has a dry/wet balance knob). The level on the mixer then determines how much of the effect is applied.
  • Insert effects are used more towards dry. The dry/wet balance setting then determines the amount of the effect: the further it is turned towards wet, the more the effect is applied.
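The rule of thumb above is easy to see as plain arithmetic. Here is a small Python sketch of a dry/wet balance as a linear crossfade (real effects may use equal-power curves, so treat this as an illustration, not any device's exact behavior):

```python
def mix_dry_wet(dry, wet, balance):
    """Linear dry/wet crossfade: balance 0.0 = fully dry, 1.0 = fully wet."""
    return [(1 - balance) * d + balance * w for d, w in zip(dry, wet)]

dry = [1.0, 0.5, -0.5]
wet = [0.2, 0.8, 0.4]  # e.g. the processed (reverb) version of the same signal

# Insert effect, mostly dry: the balance knob sets how much effect you hear.
print(mix_dry_wet(dry, wet, 0.25))

# Send effect: the device itself stays fully wet (balance = 1.0) and the
# mixer's send level scales the result instead.
send_level = 0.5
print([send_level * w for w in mix_dry_wet(dry, wet, 1.0)])
```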

The combinator

The Combinator is a utility designed to merge different instruments or effects into one single device. A Combinator can be created from the ground up, or you can browse the various Combinator patches in the Factory Soundbank. Creating Combinators may look like it requires a lot of skill, but it totally depends on how complicated you want to make them.

From a beginner's point of view, an easy two-synth layered sound could start with the following set of instructions:

  • Create a combinator
  • Create a Utility > Mixer 6:2
  • Create an Instrument > Subtractor
  • Create an Instrument > Malstrom

In this case, you have created a two-layered instrument that consists of a Subtractor and Malstrom.

While this is one of the basic features of the Combinator, you can take setups like these very far. Just to name one feature: Show Programmer. Like the NN-XT's remote editor, this is the 'under the hood' view where you define the keyboard ranges and other settings that shape the sound.


Players in Reason 10

In Reason 10 you will spot something called 'players'. Players are handy devices that give quick access to arpeggiators and chord players. Without knowing anything about music theory, these tools just make life a tiny bit easier. A player is normally plugged in above the instrument you want to 'play'.


If you want to be on the creative side, here are a few ideas that come to mind

In conclusion


While this is just a beginner's guide, I hope this information provides enough to get started with Propellerhead Reason 10.


Written by hydlide
Published: 2017-12-03