MIDI Tutorial: Creating Browser-Based Audio Applications Controlled by MIDI Hardware

While the Web Audio API is increasing in popularity, especially among HTML5 game developers, the Web MIDI API is still little known among frontend developers. A big part of this probably has to do with its current lack of support and accessible documentation; the Web MIDI API is currently only supported in Google Chrome, and only if you enable a special flag for it. Browser manufacturers currently put little emphasis on this API, as the specification is still a W3C Working Draft.

Designed in the early 80s by several music industry representatives, MIDI (short for Musical Instrument Digital Interface) is a standard communication protocol for electronic music devices. Even though other protocols, such as OSC, have been developed since then, thirty years later MIDI is still the de facto communication protocol for audio hardware manufacturers. You will be hard-pressed to find a modern music producer who does not own at least one MIDI device in their studio.

With the fast development and adoption of the Web Audio API, we can now start building browser-based applications that bridge the gap between the cloud and the physical world. Not only does the Web MIDI API allow us to build synthesizers and audio effects, but we can even start building browser-based DAWs (Digital Audio Workstations) similar in features and performance to their current Flash-based counterparts (check out Audiotool, for example).

In this MIDI tutorial, I will guide you through the basics of the Web MIDI API, and we will build a simple monosynth that you will be able to play with your favorite MIDI device. The full source code is available here, and you can test the live demo directly. If you do not own a MIDI device, you can still follow this tutorial by checking out the 'keyboard' branch of the GitHub repository, which enables basic support for your computer keyboard, so you can play notes and change octaves. This is also the version that is available as the live demo. However, due to the limitations of a computer keyboard, velocity and detune are both disabled whenever you use it to control the synthesizer. Please refer to the README file on GitHub to read about the key/note mapping.


MIDI Tutorial Prerequisites

You will need the following for this MIDI tutorial:

  • Google Chrome (version 38 or above) with the Web MIDI flag enabled (under chrome://flags)
  • (Optional) A MIDI device that can trigger notes, connected to your computer

We will also be using Angular.js to bring a bit of structure to our application; therefore, basic knowledge of the framework is a prerequisite.

Getting Started

We will modularize our MIDI application from the ground up by separating it into 3 modules:

  • WebMIDI: handling the various MIDI devices connected to your computer
  • WebAudio: providing the audio source for our synth
  • WebSynth: connecting the web interface to the audio engine

A main app module will handle the user interaction with the web user interface. Our application structure could look a bit like this:
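As a rough illustration, the module declarations could look like the following sketch (assuming Angular.js is loaded; the module names other than the three listed above are assumptions):

// Declare one Angular module per responsibility, plus a main app module tying them together.
angular.module('WebMIDI', []);                          // MIDI device handling
angular.module('WebAudio', []);                         // audio engine components
angular.module('WebSynth', ['WebMIDI', 'WebAudio']);    // glue code and UI controller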

You should also install the following libraries to help you build up your application: Angular.js, Bootstrap, and jQuery. Probably the easiest way to install these is via Bower.

The WebMIDI Module: Connecting with the Real World

Let's start figuring out how to use MIDI by connecting our MIDI devices to our application. To do so, we will create a simple factory returning a single method. To connect to our MIDI devices via the Web MIDI API, we need to call the navigator.requestMIDIAccess() method:
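The original listing is not reproduced here; a minimal sketch of such a factory could look like this (the factory name 'Devices' and its shape are assumptions; navigator.requestMIDIAccess() is the actual Web MIDI API entry point):

// WebMIDI module: a factory exposing a single connect() method.
angular.module('WebMIDI')
    .factory('Devices', ['$window', function($window) {
        return {
            connect: function() {
                if ($window.navigator.requestMIDIAccess) {
                    // Returns a Promise that resolves with a MIDIAccess object.
                    return $window.navigator.requestMIDIAccess();
                }
                throw new Error('Web MIDI API is not supported by this browser.');
            }
        };
    }]);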

And that’s pretty much it!

The requestMIDIAccess() method returns a promise, so we can just return it directly and handle the result of the promise in our app's controller:
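A minimal sketch of that controller, under the naming assumptions used in the previous snippet (controller and scope names are also assumptions):

angular.module('WebSynth')
    .controller('SynthCtrl', ['$scope', 'Devices', function($scope, Devices) {
        Devices.connect().then(function(access) {
            $scope.midiAccess = access;   // MIDIAccess object exposing .inputs and .outputs
        }, function(err) {
            console.error('MIDI access refused:', err);
        });
    }]);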

As mentioned, requestMIDIAccess() returns a promise, which passes a MIDIAccess object to the success callback; this object exposes two properties: inputs and outputs.

In earlier versions of Chrome, these two properties were methods allowing you to retrieve an array of input and output devices directly. However, in the latest updates, these properties are now maplike objects. This makes quite a difference, since we now need to call the values() method on either the inputs or outputs object to retrieve the corresponding list of devices. This method acts like a generator function and returns an iterator. The API is designed with modern ECMAScript iterators in mind, so this behavior makes sense, even though it is not as straightforward as the original implementation.

Finally, we can retrieve the number of devices via the size property of the inputs object. If there is at least one device, we simply iterate over the result by calling the next() method of the iterator object, and pushing each device to an array defined on the $scope. On the front end, we can implement a simple select box which lists all the available input devices and lets us choose which device we want to use as the active device to control the web synth:
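Putting those pieces together, the iteration and the select box might look like the following sketch (the ng-model name 'activeDevice' and the markup are assumptions):

// In the controller, once the promise resolves with the MIDIAccess object:
$scope.devices = [];
var inputs = access.inputs;              // maplike MIDIInputMap
if (inputs.size > 0) {
    var iterator = inputs.values();      // iterator over the MIDIInput ports
    for (var entry = iterator.next(); !entry.done; entry = iterator.next()) {
        $scope.devices.push(entry.value);
    }
}

<!-- In the view: -->
<select ng-model="activeDevice"
        ng-options="device.name for device in devices">
    <option value="">-- choose a MIDI device --</option>
</select>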

We bind this select box to a $scope variable, which we will later use to connect the active device to the synth.


The WebAudio Module: Making Noise

The Web Audio API allows us not only to play sound files, but also to generate sounds by recreating the essential components of synthesizers, such as oscillators, filters, and gain nodes, amongst others.

Create an Oscillator

The role of an oscillator is to output a waveform. There are various types of waveforms, of which four are supported by the Web Audio API: sine, square, triangle and sawtooth. A waveform is said to "oscillate" at a certain frequency, and it is also possible to define your own custom wavetable if needed. Only a certain range of frequencies - roughly 20 Hz to 20 kHz - is audible to human beings. When oscillating at very low frequencies, oscillators can also help us build LFOs ("low frequency oscillators") to modulate our sounds, but that is beyond the scope of this tutorial.

The first thing we need to do to create some sound is to instantiate a new AudioContext:
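For example (the webkit prefix guard reflects browsers of that era):

// Create the audio context once for the whole application.
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var ctx = new AudioContext();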

From there, we can instantiate any of the components made available by the WebAudio API. Since we might create multiple instances of each component, it makes sense to create services to be able to create new, unique instances of the components we need. Let’s start by creating the service to generate a new oscillator:
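A bare-bones version of such a service could look like this sketch (the factory name and return shape are assumptions):

angular.module('WebAudio')
    .factory('Oscillator', function() {
        // Returns a constructor; call it with an AudioContext instance.
        return function Oscillator(ctx) {
            this.osc = ctx.createOscillator(); // defaults to a sine wave
        };
    });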

We can now instantiate new oscillators at our will, passing as an argument the AudioContext instance we created earlier. To make things easier down the road, we will add some wrapper methods - mere syntactic sugar - and return the Oscillator function:
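The wrapper methods might look like the sketch below (the method names are assumptions; the underlying calls are standard OscillatorNode methods):

angular.module('WebAudio')
    .factory('Oscillator', function() {
        function Oscillator(ctx) {
            this.osc = ctx.createOscillator();
        }
        // Thin wrappers around the OscillatorNode API.
        Oscillator.prototype.setOscType = function(type) {
            if (type) { this.osc.type = type; } // 'sine', 'square', 'triangle', 'sawtooth'
        };
        Oscillator.prototype.setFrequency = function(freq, time) {
            this.osc.frequency.setTargetAtTime(freq, 0, time || 0.01);
        };
        Oscillator.prototype.start   = function(pos)  { this.osc.start(pos || 0); };
        Oscillator.prototype.stop    = function(pos)  { this.osc.stop(pos || 0); };
        Oscillator.prototype.connect = function(node) { this.osc.connect(node); };
        return Oscillator;
    });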

Create a Multipass Filter and a Volume Control

We need two more components to complete our basic audio engine: a multipass filter, to give a bit of shape to our sound, and a gain node to control the volume of our sound and turn the volume on and off. To do so, we can proceed in the same way we did for the oscillator: create services returning a function with some wrapper methods. All we need to do is provide the AudioContext instance and call the appropriate method.

We create a filter by calling the createBiquadFilter() method of the AudioContext instance:
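For instance (the factory and wrapper names are assumptions; createBiquadFilter() is the Web Audio API call):

angular.module('WebAudio')
    .factory('Filter', function() {
        function Filter(ctx) {
            this.filter = ctx.createBiquadFilter(); // 'lowpass' by default
        }
        Filter.prototype.setFilterType = function(type) {
            if (type) { this.filter.type = type; }  // 'lowpass', 'highpass', 'bandpass', ...
        };
        Filter.prototype.setFrequency = function(freq) { this.filter.frequency.value = freq; };
        Filter.prototype.setResonance = function(q)    { this.filter.Q.value = q; };
        Filter.prototype.connect      = function(node) { this.filter.connect(node); };
        return Filter;
    });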

Similarly, for a gain node, we call the createGain() method:
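A matching sketch for the gain node (again, names are assumptions; createGain() and setTargetAtTime() are the Web Audio API calls):

angular.module('WebAudio')
    .factory('Volume', function() {
        function Volume(ctx) {
            this.gain = ctx.createGain();
            this.gain.gain.value = 0.0;     // start muted
        }
        Volume.prototype.setVolume = function(level, time) {
            // 'time' is the transition time, in seconds (attack or release).
            this.gain.gain.setTargetAtTime(level, 0, time || 0.01);
        };
        Volume.prototype.connect = function(node) { this.gain.connect(node); };
        return Volume;
    });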

The WebSynth Module: Wiring Things Up

Now we are almost ready to build our synth interface and connect MIDI devices to our audio source. First, we need to wire our audio engine together and get it ready to receive MIDI notes. To connect the audio engine, we simply create new instances of the components that we need, and then "connect" them together using the connect() method available on each component's instance. The connect() method takes one argument, which is simply the node you want to connect the current instance to. It is possible to orchestrate a more elaborate chain of components, as the connect() method can connect one node to multiple destinations (making it possible to implement things like cross-fading and more).
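Under the assumptions of the earlier sketches (Oscillator, Filter and Volume wrappers in scope), the wiring could look like this:

// Wire the chain: oscillator -> filter -> gain -> speakers.
var ctx    = new (window.AudioContext || window.webkitAudioContext)();
var osc    = new Oscillator(ctx);
var filter = new Filter(ctx);
var volume = new Volume(ctx);

osc.connect(filter.filter);
filter.connect(volume.gain);
volume.connect(ctx.destination);
osc.start(0);   // run continuously; the gain node acts as our on/off switch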

We just built the internal wiring of our audio engine. You can play around a bit and try different combinations of wiring, but remember to turn down the volume to avoid damaging your ears. Now we can hook up the MIDI interface to our application and send MIDI messages to the audio engine. We will set up a watcher on the device select box to virtually "plug" it into our synth. We will then listen to MIDI messages coming from the device, and pass the information to the audio engine:
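A minimal sketch of that watcher, assuming the 'activeDevice' scope variable from the select box above and a hypothetical 'engine' service exposing noteOn()/noteOff():

// Watch the select box; when a device is chosen, start listening to it.
$scope.$watch('activeDevice', function(device) {
    if (!device) { return; }
    device.onmidimessage = function(msg) {
        var status   = msg.data[0] & 0xf0;  // strip the channel nibble
        var note     = msg.data[1];
        var velocity = msg.data[2];
        switch (status) {
            case 144: engine.noteOn(note, velocity); break; // 0x90: note on
            case 128: engine.noteOff(note);          break; // 0x80: note off
        }
    };
});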

Here, we are listening to MIDI events from the device, analysing the data from the MIDIMessageEvent object, and passing it to the appropriate method - either noteOn() or noteOff() - based on the event code (144 for noteOn, 128 for noteOff). We can now add the logic in the respective methods in the audio module to actually generate a sound:
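A sketch of those two methods, assuming the ctx, filter and volume instances from the wiring sketch above (all names are assumptions). Since an OscillatorNode can only be started once, this sketch recreates the oscillator whenever the pitch changes, which matches the "stop the oscillator" step described below; the noteToFrequency() helper is shown just after the next paragraph:

var activeNotes = [];    // notes currently held down
var currentOsc  = null;

function playFrequency(freq) {
    if (currentOsc) { currentOsc.stop(0); }   // an OscillatorNode is single-use
    currentOsc = ctx.createOscillator();
    currentOsc.frequency.value = freq;
    currentOsc.connect(filter.filter);
    currentOsc.start(0);
}

function noteOn(note, velocity) {
    activeNotes.push(note);
    playFrequency(noteToFrequency(note));
    volume.setVolume(1.0, 0.05);              // fixed volume for now; 0.05 s attack
}

function noteOff(note) {
    var i = activeNotes.indexOf(note);
    if (i !== -1) { activeNotes.splice(i, 1); }
    if (activeNotes.length === 0) {
        volume.setVolume(0.0, 0.2);           // last note released: 0.2 s release
    } else {
        // Fall back to the most recently queued note still held down.
        playFrequency(noteToFrequency(activeNotes[activeNotes.length - 1]));
    }
}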

A few things are happening here. In the noteOn() method, we first push the current note to an array of notes. Even though we are building a monosynth (meaning we can only play one note at a time), we can still have several fingers on the keyboard at once. So, we need to queue all these notes so that when we release one note, the next one is played. We then need to stop the oscillator to assign the new frequency, which we convert from a MIDI note number (on a scale from 0 to 127) to an actual frequency value with a bit of math:
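The conversion is the standard equal-temperament formula (MIDI note 69 is A4 = 440 Hz, and each semitone multiplies the frequency by 2^(1/12)):

function noteToFrequency(note) {
    return 440 * Math.pow(2, (note - 69) / 12);
}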

In the noteOff() method, we first find the note in the array of active notes and remove it. Then, if it was the only note in the array, we simply turn off the volume.

The second argument of the volume-setting method is the transition time, meaning how long it takes the gain to reach the new volume value. In musical terms, if the note is on, it is the equivalent of the attack time; if the note is off, it is the equivalent of the release time.

The WebAnalyser Module: Visualising our Sound

Another interesting feature we can add to our synth is an analyser node, which allows us to display the waveform of our sound, using a canvas to render it. Creating an analyser node is a bit more involved than other AudioContext objects, as it also requires creating a scriptProcessor node to actually perform the analysis. We start by selecting the canvas element in the DOM:
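For example (the element id is an assumption):

// Grab the <canvas> element and its 2D drawing context.
var canvas    = document.getElementById('analyser');
var canvasCtx = canvas.getContext('2d');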

Then, we add a method in which we create both the analyser and the script processor:
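A sketch of that setup, assuming the canvas variables from the previous snippet (the function name, buffer size and gradient colors are assumptions):

function setupAnalyser(ctx, sourceNode) {
    // The script processor gives us a periodic 'audioprocess' callback.
    var processor = ctx.createScriptProcessor(2048, 1, 1);
    processor.connect(ctx.destination);

    var analyser = ctx.createAnalyser();
    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 512;

    // Feed the analyser from the synth output, and keep the audible path alive.
    sourceNode.connect(analyser);
    analyser.connect(processor);
    sourceNode.connect(ctx.destination);

    // Gradient used when drawing the graph.
    var gradient = canvasCtx.createLinearGradient(0, 0, 0, canvas.height);
    gradient.addColorStop(0, '#ff0000');
    gradient.addColorStop(1, '#00ff00');

    return { analyser: analyser, processor: processor, gradient: gradient };
}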

First, we create a scriptProcessor object and connect it to the destination. Then, we create the analyser itself, which we feed with the audio output from the oscillator or filter. Notice how we still need to connect the audio output to the destination so we can hear it! We also need to define the gradient colors of our graph - this is done by calling the createLinearGradient() method of the canvas's 2D drawing context.

Finally, the scriptProcessor fires an 'audioprocess' event at regular intervals; when this event fires, we read the frequencies captured by the analyser, clear the canvas, and redraw the frequency graph:
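One way to draw such a graph, assuming the analyser, processor and gradient returned by the setup sketch above:

processor.onaudioprocess = function() {
    var freqData = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(freqData);

    canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
    canvasCtx.fillStyle = gradient;
    for (var i = 0; i < freqData.length; i++) {
        // One thin bar per frequency bin, scaled to the canvas height.
        var barHeight = (freqData[i] / 255) * canvas.height;
        canvasCtx.fillRect(i * 2, canvas.height - barHeight, 1, barHeight);
    }
};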

Last but not least, we will need to modify the wiring of our audio engine a bit to accommodate this new component:
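Under the same assumptions, the rewired chain could look like this:

// New chain: oscillator -> filter -> gain -> analyser/scriptProcessor, plus speakers.
osc.connect(filter.filter);
filter.connect(volume.gain);
volume.gain.connect(analyser);
analyser.connect(processor);
volume.gain.connect(ctx.destination);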

We now have a nice visualiser which displays the waveform of our synth in real time! It involves a bit of work to set up, but it is very interesting and insightful, especially when using filters.

Building Up on our Synth: Adding Velocity & Detune

At this point in our MIDI tutorial we have a pretty cool synth, but it plays every note at the same volume. This is because instead of handling the velocity data properly, we simply set the volume to a fixed value of 1.0. Let's start by fixing that, and then we will see how we can enable the detune wheel found on most common MIDI keyboards.

Enabling Velocity

If you are unfamiliar with the term, 'velocity' refers to how hard you hit the key on your keyboard. Based on this value, the resulting sound is either softer or louder.

In our MIDI tutorial synth, we can emulate this behavior by simply playing with the volume of the gain node. To do so, we first need to do a bit of math to convert the MIDI velocity into a float value between 0.0 and 1.0 to pass to the gain node:
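For example (the helper name is an assumption):

// MIDI velocity runs from 0 to 127; the gain node expects a value around 0.0-1.0.
function velocityToGain(velocity) {
    return Math.round((velocity / 127) * 100) / 100;   // keep two decimals
}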

The velocity range of a MIDI device is 0 to 127, so we simply divide the value by 127 and return a float with two decimals. Then, we can update the noteOn() method to pass the calculated value to the gain node:
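Reusing the sketches above, the updated method could look like this:

function noteOn(note, velocity) {
    activeNotes.push(note);
    playFrequency(noteToFrequency(note));
    volume.setVolume(velocityToGain(velocity), 0.05); // harder hit, louder note
}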

And that's it! Now, when we play our synth, we will notice that the volume varies based on how hard we hit the keys on our keyboard.

Enabling the Detune Wheel on your MIDI Keyboard

Most MIDI keyboards feature a detune wheel; the wheel allows you to slightly alter the frequency of the note currently being played, creating an interesting effect known as 'detune'. This is fairly easy to implement as you learn how to use MIDI, since the detune wheel also fires a MIDIMessageEvent with its own event code (224), which we can listen for and act upon by recalculating the frequency value and updating the oscillator.

First, we need to catch the event in our synth. To do so, we add an extra case to the switch statement we created in the MIDI message callback:
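Extending the listener sketch from earlier (device and engine are the same assumed names):

device.onmidimessage = function(msg) {
    var status = msg.data[0] & 0xf0;
    switch (status) {
        case 144: engine.noteOn(msg.data[1], msg.data[2]); break; // note on
        case 128: engine.noteOff(msg.data[1]);             break; // note off
        case 224: engine.detune(msg.data[2]);              break; // detune/pitch wheel
    }
};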

Then, we define the corresponding detune method on the audio engine:
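One possible implementation, reusing activeNotes, noteToFrequency() and currentOsc from the sketches above, and treating the wheel's coarse value (0-127, centered at 64) as a shift of up to two semitones (the range is an assumption):

function detune(value) {
    if (activeNotes.length === 0) { return; }
    var base = noteToFrequency(activeNotes[activeNotes.length - 1]);
    if (value === 64) {
        currentOsc.frequency.value = base;               // wheel centered: no detune
    } else {
        var semitones = ((value - 64) / 64) * 2;         // scale to +/- 2 semitones
        currentOsc.frequency.value = base * Math.pow(2, semitones / 12);
    }
}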

The default detune value is 64, which means there is no detune applied, so in this case we simply pass the current frequency to the oscillator.

Finally, we also need to update the noteOff() method to take the detune into consideration, re-applying the last detune value in case another note is still queued.

Creating the Interface

So far, we have only created a select box to choose our MIDI device and a waveform visualiser, but we have no way to modify the sound directly by interacting with the web page. Let's create a very simple interface using common form elements, and bind them to our audio engine.

Creating a Layout for the Interface

We will create various form elements to control the sound of our synth:

  • A radio group to select the oscillator type
  • A checkbox to enable / disable the filter
  • A radio group to select the filter type
  • Two ranges to control the filter’s frequency and resonance
  • Two ranges to control the attack and release of the gain node

Creating an HTML document for our interface, we should end up with something like this:
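A bare-bones (unstyled) version of that markup could look like this sketch (the ng-model names are assumptions and simply mirror the list above):

<div ng-controller="SynthCtrl">
    <label><input type="radio" ng-model="oscType" value="sine"> sine</label>
    <label><input type="radio" ng-model="oscType" value="square"> square</label>
    <label><input type="radio" ng-model="oscType" value="triangle"> triangle</label>
    <label><input type="radio" ng-model="oscType" value="sawtooth"> sawtooth</label>

    <label><input type="checkbox" ng-model="filterOn"> enable filter</label>
    <label><input type="radio" ng-model="filterType" value="lowpass"> lowpass</label>
    <label><input type="radio" ng-model="filterType" value="highpass"> highpass</label>

    <input type="range" ng-model="filterFrequency" min="0" max="10000">
    <input type="range" ng-model="filterResonance" min="0" max="30">
    <input type="range" ng-model="attack"  min="0" max="1" step="0.01">
    <input type="range" ng-model="release" min="0" max="1" step="0.01">
</div>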

Making the user interface look fancy is not something I will cover in this basic MIDI tutorial; instead, we can save polishing the user interface as an exercise for later, perhaps to end up with something like this:

(Image: polished MIDI user interface)

Binding the Interface to the Audio Engine

We should define a few methods to bind these controls to our audio engine.

Controlling the Oscillator

For the oscillator, we only need a method allowing us to set the oscillator type:
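A minimal sketch, reusing the currentOsc variable from the earlier snippets (the stored default is an assumption):

var oscType = 'sine';
function setOscType(type) {
    if (type) {
        oscType = type;                            // remembered for newly created oscillators
        if (currentOsc) { currentOsc.type = type; }
    }
}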

Controlling the Filter

For the filter, we need three controls: one for the filter type, one for the frequency, and one for the resonance. We can also wire the checkbox to methods that connect or disconnect the filter node, enabling and disabling the filter.
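A sketch of those controls, operating on the underlying nodes from the earlier wiring (all function names are assumptions):

function setFilterType(type)      { if (type) { filter.filter.type = type; } }
function setFilterFrequency(freq) { filter.filter.frequency.value = parseFloat(freq); }
function setFilterResonance(q)    { filter.filter.Q.value = parseFloat(q); }

function enableFilter(enabled) {
    // Re-route the chain depending on the checkbox: with or without the filter.
    if (!currentOsc) { return; }
    currentOsc.disconnect();
    if (enabled) { currentOsc.connect(filter.filter); }
    else         { currentOsc.connect(volume.gain); }
}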

Controlling the Attack and Release

To shape our sound a bit, we can change the attack and release parameters of the gain node. We need two methods for this:
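For example (values in seconds; the defaults are assumptions):

var attack  = 0.05;
var release = 0.2;

function setAttack(value)  { attack  = parseFloat(value); }
function setRelease(value) { release = parseFloat(value); }
// noteOn() and noteOff() can then pass `attack` and `release` as the
// transition time when calling volume.setVolume().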

Setting Up Watchers

Finally, in our app’s controller, we only need to setup a few watchers and bind them to the various methods we just created:
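Assuming the setter names from the sketches above are exposed on a hypothetical 'engine' service, the watchers could look like this:

$scope.$watch('oscType',         function(v) { if (v) { engine.setOscType(v); } });
$scope.$watch('filterOn',        function(v) { engine.enableFilter(!!v); });
$scope.$watch('filterType',      function(v) { if (v) { engine.setFilterType(v); } });
$scope.$watch('filterFrequency', function(v) { if (v) { engine.setFilterFrequency(v); } });
$scope.$watch('filterResonance', function(v) { if (v) { engine.setFilterResonance(v); } });
$scope.$watch('attack',          function(v) { if (v) { engine.setAttack(v); } });
$scope.$watch('release',         function(v) { if (v) { engine.setRelease(v); } });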

Conclusion

A lot of concepts were covered in this MIDI tutorial; mostly, we discovered how to use the Web MIDI API, which is fairly undocumented apart from the official specification from the W3C. The Google Chrome implementation is pretty straightforward, although the switch to an iterator object for the input and output devices requires a bit of refactoring for legacy code using the old implementation.

As for the Web Audio API, it is a very rich API, and we only covered a few of its capabilities in this tutorial. Unlike the Web MIDI API, the Web Audio API is very well documented, in particular on the Mozilla Developer Network, which contains a plethora of code examples and detailed lists of the various arguments and events for each component. These will help you implement your own custom browser-based audio applications.

As both APIs continue to grow, they will open some very interesting possibilities for JavaScript developers, allowing us to develop fully-featured, browser-based DAWs that will be able to compete with their Flash equivalents. And for desktop developers, you can also start creating your own cross-platform applications using tools such as node-webkit. Hopefully, this will spawn a new generation of music tools for audiophiles that will empower users by bridging the gap between the physical world and the cloud.

Source: https://www.toptal.com/web/creating-browser-based-audio-applications-controlled-by-midi-hardware

Abstract

Some user agents have music devices, such as synthesizers, keyboard and other controllers, and drum machines connected to their host computer or device. The widely adopted Musical Instrument Digital Interface (MIDI) protocol enables electronic musical instruments, controllers and computers to communicate and synchronize with each other. MIDI does not transmit audio signals: instead, it sends event messages about musical notes, controller signals for parameters such as volume, vibrato and panning, cues and clock signals to set the tempo, and system-specific MIDI communications (e.g. to remotely store synthesizer-specific patch data). This same protocol has become a standard for non-musical uses, such as show control, lighting and special effects control.

This specification defines an API supporting the MIDI protocol, enabling web applications to enumerate and select MIDI input and output devices on the client system and send and receive MIDI messages. It is intended to enable non-music MIDI applications as well as music ones, by providing low-level access to the MIDI devices available on the users' systems. The Web MIDI API is not intended to describe music or controller inputs semantically; it is designed to expose the mechanics of MIDI input and output interfaces, and the practical aspects of sending and receiving MIDI messages, without identifying what those actions might mean semantically (e.g., in terms of "modulate the vibrato by 20Hz" or "play a G#7 chord", other than in terms of changing a controller value or sending a set of note-on messages that happen to represent a G#7 chord).

To some users, "MIDI" has become synonymous with Standard MIDI Files and General MIDI. That is not the intent of this API; the use case of simply playing back a .SMF file is not within the purview of this specification (it could be considered a different format to be supported by the HTML5 <audio> element, for example). The Web MIDI API is intended to enable direct access to devices that respond to MIDI - external synthesizers or lighting systems, for example, or even the software synthesizers that are built in to many common operating systems. The Web MIDI API is also explicitly designed to enable a new class of applications on the web that can respond to MIDI controller inputs - using external hardware controllers with physical buttons, knobs and sliders (as well as musical controllers like keyboard, guitar or wind instrument controllers) to control web applications.

The Web MIDI API is also expected to be used in conjunction with other APIs and elements of the web platform, notably the Web Audio API. This API is also intended to be familiar to users of MIDI APIs on other systems, such as Apple's CoreMIDI and Microsoft's Windows MIDI API.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document was published by the Audio Working Group as a Working Draft. This document is intended to become a W3C Recommendation. If you wish to make comments regarding this document, please send them to [email protected] (subscribe, archives). All comments are welcome.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 August 2014 W3C Process Document.


1. Introduction

This section is non-normative.

The Web MIDI API specification defines a means for web developers to enumerate, manipulate and access MIDI devices - for example interfaces that may provide hardware MIDI ports with other devices plugged in to them and USB devices that support the USB-MIDI specification. Having a Web API for MIDI enables web applications that use existing software and hardware synthesizers, hardware music controllers and light systems and other mechanical apparatus controlled by MIDI. This API has been defined with this wide variety of use cases in mind.

The approaches taken by this API are similar to those taken in Apple's CoreMIDI API and Microsoft's Windows MIDI API; that is, the API is designed to represent the low-level software protocol of MIDI, in order to enable developers to build powerful MIDI software on top. The API enables the developer to enumerate input and output interfaces, and send and receive MIDI messages, but (similar to the aforementioned APIs) it does not attempt to semantically define or interpret MIDI messages beyond what is necessary to robustly support current devices.

The Web MIDI API is not intended to directly implement high-level concepts such as sequencing; it does not directly support Standard MIDI Files, for example, although a Standard MIDI File player can be built on top of the Web MIDI API. It is also not intended to semantically capture patches or controller assignments, as General MIDI does; such interpretation is outside the scope of the Web MIDI API (though again, General MIDI can easily be utilized through the Web MIDI API).

2. Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words MUST and SHOULD are to be interpreted as described in [RFC2119].

This specification defines conformance criteria that apply to a single product: the user agent that implements the interfaces that it contains.

Implementations that use ECMAScript to implement the APIs defined in this specification MUST implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [WEBIDL], as this specification uses that specification and terminology.

3. Terminology

The concepts queue a task and fire a simple event are defined in [HTML5].

The terms event handler and event handler event types, and the corresponding EventHandler interface, are defined in [HTML5].

The Uint8Array interface is defined in [TYPED-ARRAYS].

The term is defined in [WEBIDL].

The and its associated interfaces and concepts are defined in [webaudio].

The interface is defined in [DOM4].

The interface is defined in [DOM-LEVEL-3-CORE].

The DOMHighResTimeStamp type is defined in [HIGHRES-TIME].

The terms MIDI, MIDI device, MIDI input port, MIDI output port, MIDI interface, MIDI message, and system exclusive are defined in [MIDI].

The interface is currently defined in the WHATWG DOM specification.

4. Obtaining Access to MIDI Devices

4.1 requestMIDIAccess()

partial interface Navigator {
  Promise<MIDIAccess> requestMIDIAccess (optional MIDIOptions options);
};

4.1.1 Methods

When invoked, requestMIDIAccess() returns a Promise object representing a request for access to MIDI devices on the user's system.

Requesting MIDI access SHOULD prompt the user for access to MIDI devices, particularly if system exclusive access is requested. In some scenarios, this permission may have already been implicitly or explicitly granted, in which case this prompt may not appear. If the user gives express permission or the call is otherwise approved, the vended Promise's resolver is invoked as a MIDISuccessCallback (i.e., with a MIDIAccess object and a MIDIOptions object as its arguments). The underlying system may choose to allow the user to select specific MIDI interfaces to expose to this API (i.e. pick and choose interfaces on an individual basis), although this is not required. The system may also choose to prompt (or not) based on whether system exclusive support is requested, as system exclusive has greater privacy and security implications.

If the user declines or the call is denied for any other reason, the Promise's rejection handler (if any) is invoked with a DOMException parameter.

When the requestMIDIAccess() method is called, the user agent MUST run the following steps:

  1. Let promise be a new Promise object and resolver be its associated resolver.

  2. Return promise and run the following steps asynchronously.

  3. Optionally, e.g. based on a previously-established user preference, for security reasons, or due to platform limitations, jump to the step labeled failure below.

  4. Optionally, e.g. based on a previously-established user preference, jump to the step labeled success below.

  5. Prompt the user in a user-agent-specific manner for permission to provide the entry script's origin with a MIDIAccess object representing control over the user's MIDI devices. This prompt may be contingent upon whether system exclusive support was requested, and may allow the user to enable or disable that access.

    If permission is denied, jump to the step labeled failure below. If the user never responds, this algorithm will never progress beyond this step. If permission is granted, continue the following steps.

  6. success: Let access be a new MIDIAccess object. (It is possible to call requestMIDIAccess() multiple times; this may prompt the user multiple times, so it may not be best practice, and the same instance of MIDIAccess will not be returned each time.)

  7. Call resolver's accept(value) method with access as value argument.

  8. Terminate these steps.

  9. failure: Let error be a new DOMException. This exception's .name should be "SecurityError" if the user or their security settings denied the application from creating a MIDIAccess instance with the requested options, "AbortError" if the page is going to be closed for a user navigation, "InvalidStateError" if the underlying systems raise any errors, or otherwise it should be "NotSupportedError".

  10. Call resolver's reject(value) method with error as value argument.

Parameter: options, of type MIDIOptions (optional).

Return type: Promise<MIDIAccess>

4.2 MIDIOptions dictionary

This dictionary contains optional settings that may be provided to the requestMIDIAccess request.

dictionary MIDIOptions {
  boolean sysex;
};

4.2.1 Dictionary Members

sysex of type boolean

This member informs the system whether the ability to send and receive system exclusive messages is requested or allowed on a given MIDIAccess object. On the options parameter passed to requestMIDIAccess(), if this member is set to true, but system exclusive support is denied (either by policy or by user action), the access request will fail with a SecurityError error. If this support is not requested (and allowed), the system will throw exceptions if the user tries to send system exclusive messages, and will silently mask out any system exclusive messages received on the port.

In the MIDIOptions parameter passed to the resolveCallback, this member indicates whether system exclusive is allowed on the MIDIAccess.

4.3 MIDIInputMap Interface

interface MIDIInputMap {
  readonly maplike<DOMString, MIDIInput>;
};

4.3.1 Maplike

This is a maplike interface whose value is a MIDIInput instance and key is its ID.

This type is used to represent all the currently available MIDI input ports. This enables

// to tell how many entries there are:
var numberOfMIDIInputs = inputs.size;

// add each of the ports to a <select> box
inputs.forEach( function( port, key ) {
  var opt = document.createElement("option");
  opt.text = port.name;
  document.getElementById("inputportselector").add(opt);
});

// or you could express in ECMAScript 6 as:
for (let input of inputs.values()) {
  var opt = document.createElement("option");
  opt.text = input.name;
  document.getElementById("inputportselector").add(opt);
}

4.4 MIDIOutputMap Interface

interface MIDIOutputMap {
  readonly maplike<DOMString, MIDIOutput>;
};

4.4.1 Maplike

This is a maplike interface whose value is a MIDIOutput instance and key is its ID.

This type is used to represent all the currently available MIDI output ports. This enables

// to tell how many entries there are:
var numberOfMIDIOutputs = outputs.size;

// add each of the ports to a <select> box
outputs.forEach( function( port, key ) {
  var opt = document.createElement("option");
  opt.text = port.name;
  document.getElementById("outputportselector").add(opt);
});

// or you could express in ECMAScript 6 as:
for (let output of outputs.values()) {
  var opt = document.createElement("option");
  opt.text = output.name;
  document.getElementById("outputportselector").add(opt);
}

4.5 MIDISuccessCallback

callback MIDISuccessCallback = void (MIDIAccess access, MIDIOptions options);

4.5.1 Callback Parameters

access of type MIDIAccess

A MIDIAccess object created to provide script access to the user's MIDI devices. This object is used to enumerate and obtain access to individual MIDI devices.

Note: The term "MIDI device" in this specification refers to a MIDI interface available to the host system; for example, if a hardware MIDI adapter is connected to the host system, it will be enumerated as a single device, even if several MIDI-supporting devices (such as synthesizers or drum machines) are plugged into hardware MIDI ports on the adapter.

options of type MIDIOptions

This parameter describes the options enabled on this MIDIAccess object.

5. MIDIAccess Interface

This interface provides the methods to list MIDI input and output devices, and obtain access to an individual device.

interface MIDIAccess : EventTarget {
  readonly attribute MIDIInputMap  inputs;
  readonly attribute MIDIOutputMap outputs;
           attribute EventHandler  onstatechange;
  readonly attribute boolean       sysexEnabled;
};

5.1 Attributes

inputs of type MIDIInputMap, readonly
The MIDI input ports available to the system.
onstatechange of type EventHandler

The handler called when a new port is connected or an existing port changes the state attribute.

This event handler, of type statechange, MUST be supported by all objects implementing the MIDIAccess interface.

Whenever a previously unavailable MIDI port becomes available for use, or an existing port changes the state attribute, the user agent SHOULD run the following steps:

  1. Let port be the MIDIPort corresponding to the newly-available, or the existing port.
  2. Let event be a newly constructed MIDIConnectionEvent, with the port attribute set to the port.
  3. Fire an event named statechange at the MIDIAccess, using the event as the event object.
outputs of type MIDIOutputMap, readonly
The MIDI output ports available to the system.
sysexEnabled of type boolean, readonly
This attribute informs the user whether system exclusive support is enabled on this MIDIAccess.
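As a non-normative sketch of how a page might react to these statechange events (not part of the original examples section):

// Log MIDI devices as they are plugged in or unplugged.
navigator.requestMIDIAccess().then( function( midiAccess ) {
  midiAccess.onstatechange = function( event ) {
    console.log( "Port '" + event.port.name + "' is now " + event.port.state +
                 " (connection: " + event.port.connection + ")" );
  };
});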

6. MIDIPort Interface

This interface represents a MIDI input or output port.

enum MIDIPortType { "input", "output" };

Enumeration description
"input": If a MIDIPort is an input port, the type member MUST be this value.
"output": If a MIDIPort is an output port, the type member MUST be this value.

enum MIDIPortDeviceState { "disconnected", "connected" };

Enumeration description
"disconnected": The device that this MIDIPort represents is disconnected from the system. When a device is disconnected from the system, it should not appear in the relevant map of input and output ports.
"connected": The device that this MIDIPort represents is connected, and should appear in the map of input and output ports.

enum MIDIPortConnectionState { "open", "closed", "pending" };

Enumeration description
"open": The device that this MIDIPort represents has been opened (either implicitly or explicitly) and is available for use.
"closed": The device that this MIDIPort represents has not been opened, or has been explicitly closed. Until a MIDIPort has been opened either explicitly (through open()) or implicitly (by adding a midimessage event handler on an input port, or calling send() on an output port), this should be the default state of the device.
"pending": The device that this MIDIPort represents has been opened (either implicitly or explicitly), but the device has subsequently been disconnected and is unavailable for use. If the device is reconnected, prior to sending a statechange event, the system should attempt to reopen the device (following the algorithm to open a MIDIPort); this will result in either the connection state transitioning to "open" or to "closed".

interface MIDIPort : EventTarget {
  readonly attribute DOMString               id;
  readonly attribute DOMString?              manufacturer;
  readonly attribute DOMString?              name;
  readonly attribute MIDIPortType            type;
  readonly attribute DOMString?              version;
  readonly attribute MIDIPortDeviceState     state;
  readonly attribute MIDIPortConnectionState connection;
           attribute EventHandler            onstatechange;
  Promise<MIDIPort> open ();
  Promise<MIDIPort> close ();
};

6.1 Attributes

connection of type MIDIPortConnectionState, readonly
The state of the connection to the device.
id of type DOMString, readonly

A unique ID of the port. This can be used by developers to remember ports the user has chosen for their application. The User Agent MUST ensure that the id is unique to only that port. The User Agent SHOULD ensure that the id is maintained across instances of the application - e.g., when the system is rebooted - and when a device is removed from the system. Applications may want to cache these ids locally to re-create a MIDI setup. Some systems may not support completely unique persistent identifiers; in such cases, it will be more challenging to maintain identifiers when another interface is added or removed from the system. (This might throw off the index of the requested port.) It is expected that the system will do the best it can to match a port across instances of the MIDI API: for example, an implementation may opaquely use some form of hash of the port interface manufacturer, name and index as the id, so that a reference to that port id is likely to match the port when plugged in. Applications may use the comparison of id of MIDIPorts to test for equality.

manufacturer of type DOMString, readonly, nullable

The manufacturer of the port.

name of type DOMString, readonly, nullable

The system name of the port.

onstatechange of type EventHandler

The handler called when an existing port changes its state or connection attributes.

This event handler, of type statechange, MUST be supported by all objects implementing the MIDIPort interface.

state of type MIDIPortDeviceState, readonly
The state of the device.
type of type MIDIPortType, readonly

A descriptor property to distinguish whether the port is an input or an output port. For MIDIInput, this MUST be "input". For MIDIOutput, this MUST be "output".

version of type DOMString, readonly, nullable

The version of the port.

6.2 Methods

Makes the MIDI device corresponding to this MIDIPort explicitly unavailable (subsequently changing the state from "open" to "connected"). Note that successful invocation of this method will result in MIDI messages no longer being delivered to MIDIMessageEvent handlers on a MIDIInputPort (although setting a new handler will cause an implicit open()).

The underlying implementation may not need to do anything in response to this call. However, some underlying implementations may not be able to support shared access to MIDI devices, and the explicit close() call enables MIDI applications to ensure other applications can gain access to devices.

When invoked, this method returns a Promise object representing a request for access to the given MIDI port on the user's system. When the port has been closed (and therefore, in exclusive access systems, the port is available to other applications), the vended Promise's resolver is invoked with the MIDIPort object as its argument. If the port is disconnected, the Promise's rejection handler (if any) is invoked.

When the close() method is called, the user agent MUST run the following steps:

  1. Let be a new Promise object and be its associated resolver.

  2. Return and run the following steps asynchronously.

  3. Let be the given object.

  4. If the port is already closed (its is - e.g. the port has not yet been implicitly or explictly opened, or has already been called on this ), jump to the step labeled closed below.

  5. If the port is an input port, skip to the next step. If the output port's is not , clear all pending send data and skip to the next step. Clear any pending send data in the system with timestamps in the future, then finish sending any send messages with no timestamp or with a timestamp in the past or present, prior to proceeding to the next step.

  6. Close access to the port in the underlying system if open, and release any blocking resources in the underlying system.

  7. success: Change the attribute of the MIDIPort to , and enqueue a new to the handler of the and to the handler of the .

  8. closed: Call 's method with as value argument.

  9. Terminate these steps.

  10. failure: Let be a new . This exception's .name should be if the port is disconnected.

  11. Call 's method with as value argument.

No parameters.

Return type: Promise<MIDIPort>

Makes the MIDI device corresponding to this MIDIPort explicitly available. Note that this call is NOT required in order to use the MIDIPort - calling send() on a MIDIOutput or attaching a MIDIMessageEvent handler on a MIDIInput port will cause an implicit open(). The underlying implementation may not need to do anything in response to this call. However, some underlying implementations may not be able to support shared access to MIDI devices, so using explicit open() and close() calls will enable MIDI applications to predictably control this exclusive access to devices.

When invoked, this method returns a Promise object representing a request for access to the given MIDI port on the user's system.

If the port device has a state of "connected", when access to the port has been obtained (and the port is ready for input or output), the vended Promise's resolver is invoked with the MIDIPort object as its argument.

If access to a connected port is not available (for example, the port is already in use in an exclusive-access-only platform), the Promise's rejection handler (if any) is invoked.

If open() is called on a port that is "disconnected", the port's connection will transition to "pending", until the port becomes "connected" or all references to it are dropped.

When this method is called, the user agent MUST run the following steps:

  1. Let be a new Promise object and be its associated resolver.

  2. Return and run the following steps asynchronously.

  3. Let be the given object.

  4. If the device's state is already (e.g. open() has already been called on this MIDIPort, or the port has been implicitly opened), jump to the step labeled opened below.

  5. If the device's state is (i.e. the connection had been opened and the device was subsequently disconnected), jump to the step labeled opened below.

  6. If the device's state is , change the attribute of the to , and enqueue a new to the handler of the and to the handler of the and jump to the step labeled opened below.

  7. Attempt to obtain access to the given MIDI device in the system. If the device is unavailable (e.g. is already in use by another process and cannot be opened, or is disconnected), jump to the step labeled failure below. If the device available and access is obtained, continue the following steps.

  8. success: Change the attribute of the MIDIPort to , and enqueue a new to the handler of the and to the handler of the .

  9. If this port is an output port and has any pending data that is waiting to be sent, asynchronously begin sending that data.

  10. opened: Call 's method with as value argument.

  11. Terminate these steps.

  12. failure: Let be a new . This exception's .name should be if the port is unavailable, or if the port is disconnected.

  13. Call 's method with as value argument.

No parameters.

Return type: Promise<MIDIPort>
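A non-normative sketch of explicitly opening and then releasing a port (assuming at least one output port is present):

// Explicitly open an output port, use it, then release it for other applications.
navigator.requestMIDIAccess().then( function( midiAccess ) {
  var output = midiAccess.outputs.values().next().value;
  if (!output)
    return;
  output.open().then( function( port ) {
    port.send( [0x90, 60, 0x7f] );   // note on, middle C
    port.send( [0x80, 60, 0x40] );   // note off immediately afterwards
    return port.close();             // release exclusive access, if any
  });
});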

Whenever the MIDI port corresponding to the MIDIPort changes the state attribute, the user agent SHOULD run the following steps:

  1. Let port be the MIDIPort.

  2. Let event be a newly constructed MIDIConnectionEvent, with the port attribute set to the port.

  3. Fire an event named statechange at the MIDIAccess, and at the MIDIPort, using the event as the event object.

6.3 MIDIInput Interface

interface MIDIInput : MIDIPort {
  attribute EventHandler onmidimessage;
};

6.3.1 Attributes

onmidimessage of type EventHandler

This event handler, of type midimessage, MUST be supported by all objects implementing the MIDIInput interface.

If the handler is set and the connection attribute is not "open", the underlying implementation tries to make the port available and change the connection attribute to "open". If it succeeds, a MIDIConnectionEvent is delivered to the corresponding MIDIPort and MIDIAccess.

Whenever the MIDI port corresponding to the MIDIInput finishes receiving one or more MIDI messages, the user agent MUST run the following steps:

  1. Let port be the MIDIInput.

  2. If the MIDIAccess did not enable system exclusive access, and the message is a system exclusive message, abort this process.

  3. Let event be a newly constructed MIDIMessageEvent, with the receivedTime attribute set to the time the message was received by the system, and with the data attribute set to a Uint8Array of MIDI data bytes representing a single MIDI message.

  4. Fire an event named midimessage at the port, using the event as the event object.

It is specifically noted that MIDI System Real-Time Messages may actually occur in the middle of other messages in the input stream; in this case, the System Real-Time messages will be dispatched as they occur, while the normal messages will be buffered until they are complete (and then dispatched).

6.4 MIDIOutput Interface

interface MIDIOutput : MIDIPort {
  void send (sequence<octet> data, optional double timestamp);
  void clear ();
};

6.4.1 Methods

Clears any pending send data that has not yet been sent from the MIDIOutput's queue. The implementation will need to ensure the MIDI stream is left in a good state, so if the output port is in the middle of a sysex message, a sysex termination byte (0xf7) should be sent.

No parameters.

Return type: void

Enqueues the message to be sent to the corresponding MIDI port. The underlying implementation will (if necessary) coerce each member of the sequence to an unsigned 8-bit integer. The use of sequence rather than a Uint8Array enables developers to use the convenience of array literals (e.g. [0x90, 0x45, 0x7f]) rather than having to create a Uint8Array - while still enabling use of Uint8Arrays for efficiency in large MIDI data scenarios (e.g. reading Standard MIDI Files and sending sysex messages).

The data contains one or more valid, complete MIDI messages. Running status is not allowed in the data, as underlying systems may not support it.

If data is not a valid sequence or does not contain a valid MIDI message, throw a TypeError exception.

If data is a system exclusive message, and the MIDIAccess did not enable system exclusive access, throw an InvalidAccessError exception.

If the port is "disconnected", throw an InvalidStateError exception.

If the port is "connected" but the connection is "closed", asynchronously try to open the port.

Parameter: data, of type sequence<octet> - The data to be enqueued, with each sequence entry representing a single byte of data.
Parameter: timestamp, of type double (optional) - The time at which to begin sending the data to the port (as a DOMHighResTimeStamp - a number of milliseconds measured relative to the navigation start of the document). If timestamp is not present or is set to zero (or another time in the past), the data is to be sent as soon as possible.

Return type: void

7. MIDIMessageEvent Interface

An event object implementing this interface is passed to a MIDIInput's onmidimessage handler when MIDI messages are received.

[Constructor(DOMString type, optional MIDIMessageEventInit eventInitDict)]
interface MIDIMessageEvent : Event {
  readonly attribute double     receivedTime;
  readonly attribute Uint8Array data;
};

7.1 Attributes

data of type Uint8Array, readonly

A Uint8Array containing the MIDI data bytes of a single MIDI message.

receivedTime of type double, readonly

A DOMHighResTimeStamp specifying when the event occurred.

Note

The DOM4 Event object has a timeStamp member that will be filled out with the current time, but it is lower precision (DOMTimeStamp is defined as an integer number of milliseconds), has a different zero reference (DOMTimeStamp is the number of milliseconds that have passed since 00:00:00 UTC on 1 January 1970), and is therefore less suitable for MIDI applications.

7.2 MIDIMessageEventInit Interface

dictionary MIDIMessageEventInit : EventInit {
  double     receivedTime;
  Uint8Array data;
};

7.2.1 Dictionary Members

data of type Uint8Array

A Uint8Array containing the MIDI data bytes of a single MIDI message.

receivedTime of type double

A DOMHighResTimeStamp specifying when the event occurred.

8. MIDIConnectionEvent Interface

An event object implementing this interface is passed to a MIDIAccess' onstatechange handler when a new port becomes available (for example, when a MIDI device is first plugged in to the computer), when a previously-available port becomes unavailable, or becomes available again (for example, when a MIDI interface is disconnected, then reconnected), and (if present) is also passed to the onstatechange handlers for any MIDIPorts referencing the port.

When a MIDIPort is in the "pending" connection state and the device is reconnected to the host system, prior to firing a statechange event the MIDIPort open algorithm is run on it to attempt to reopen the port. If this transition fails (e.g. the port is reserved by something else in the underlying system, and therefore unavailable for use), the connection state moves to "closed", else it transitions back to "open". This is done prior to the event for the device state change so that the event will reflect the final connection state as well as the device state.

Some underlying systems may not provide notification events for device connection status; such systems may have long time delays as they poll for new devices infrequently. As such, it is suggested that heavy reliance on connection events not be used.

[Constructor(DOMString type, optional MIDIConnectionEventInit eventInitDict)]
interface MIDIConnectionEvent : Event {
  readonly attribute MIDIPort port;
};

8.1 Attributes

port of type MIDIPort, readonly

The port that has been connected or disconnected.

8.2 MIDIConnectionEventInit Interface

dictionary MIDIConnectionEventInit : EventInit {
  MIDIPort port;
};

8.2.1 Dictionary Members

port of type MIDIPort

The port that has been connected or disconnected.

9. Examples of Web MIDI API Usage in JavaScript

This section is non-normative.

The following are some examples of common MIDI usage in JavaScript.

9.1 Getting Access to the MIDI System

This example shows how to request access to the MIDI system.

var midi = null;  // global MIDIAccess object

function onMIDISuccess( midiAccess ) {
  console.log( "MIDI ready!" );
  midi = midiAccess;  // store in the global (in real usage, would probably keep in an object instance)
}

function onMIDIFailure(msg) {
  console.log( "Failed to get MIDI access - " + msg );
}

navigator.requestMIDIAccess().then( onMIDISuccess, onMIDIFailure );

9.2 Requesting Access to the MIDI System with System Exclusive Support

This example shows how to request access to the MIDI system, including the ability to send and receive system exclusive messages.

var midi = null;  // global MIDIAccess object

function onMIDISuccess( midiAccess ) {
  console.log( "MIDI ready!" );
  midi = midiAccess;  // store in the global (in real usage, would probably keep in an object instance)
}

function onMIDIFailure(msg) {
  console.log( "Failed to get MIDI access - " + msg );
}

navigator.requestMIDIAccess( { sysex: true } ).then( onMIDISuccess, onMIDIFailure );

9.3 Listing Inputs and Outputs

This example gets the list of the input and output ports and prints their information to the console log, using ES6 for...of notation.

function listInputsAndOutputs( midiAccess ) {
  for (var input of midiAccess.inputs.values()) {
    console.log( "Input port [type:'" + input.type + "'] id:'" + input.id +
      "' manufacturer:'" + input.manufacturer + "' name:'" + input.name +
      "' version:'" + input.version + "'" );
  }
  for (var output of midiAccess.outputs.values()) {
    console.log( "Output port [type:'" + output.type + "'] id:'" + output.id +
      "' manufacturer:'" + output.manufacturer + "' name:'" + output.name +
      "' version:'" + output.version + "'" );
  }
}

9.4 Handling MIDI Input

This example prints incoming MIDI messages on a single arbitrary input port to the console log.

function onMIDIMessage( event ) {
  var str = "MIDI message received at timestamp " + event.receivedTime + " [" + event.data.length + " bytes]: ";
  for (var i = 0; i < event.data.length; i++) {
    str += "0x" + event.data[i].toString(16) + " ";
  }
  console.log( str );
}

function startLoggingMIDIInput( midiAccess, indexOfPort ) {
  midiAccess.inputs.forEach( function(entry) { entry.onmidimessage = onMIDIMessage; });
}

9.5 Sending MIDI Messages to an Output Device

This example sends a middle C note on message immediately on MIDI channel 1 (MIDI channels are 0-indexed, but generally referred to as channels 1-16), and queues a corresponding note off message for 1 second later.

function sendMiddleC( midiAccess, portID ) {
  var noteOnMessage = [0x90, 60, 0x7f];   // note on, middle C, full velocity
  var output = midiAccess.outputs.get(portID);
  output.send( noteOnMessage );  // omitting the timestamp means send immediately.
  output.send( [0x80, 60, 0x40], window.performance.now() + 1000.0 );
  // Inlined array creation - note off, middle C,
  // release velocity = 64, timestamp = now + 1000ms.
}

9.6 A Simple Loopback

This example loops all input messages on the first input port to the first output port - including system exclusive messages.

var midi = null;    // global MIDIAccess object
var output = null;

function echoMIDIMessage( event ) {
  if (output) {
    output.send( event.data, event.receivedTime );
  }
}

function onMIDISuccess( midiAccess ) {
  console.log( "MIDI ready!" );
  var input = midiAccess.inputs.values().next().value;
  if (input)
    input.onmidimessage = echoMIDIMessage;
  output = midiAccess.outputs.values().next().value;
  if (!input || !output)
    console.log("Uh oh! Couldn't get i/o ports.");
}

function onMIDIFailure(msg) {
  console.log( "Failed to get MIDI access - " + msg );
}

navigator.requestMIDIAccess().then( onMIDISuccess, onMIDIFailure );

9.7 A Simple Monophonic Sine Wave MIDI Synthesizer

This example listens to all input messages from all available input ports, and uses note messages to drive the envelope and frequency on a monophonic sine wave oscillator, creating a very simple synthesizer, using the Web Audio API. Note on and note off messages are supported, but sustain pedal, velocity and pitch bend are not. This sample is also hosted on webaudiodemos.appspot.com.

var context=null;     // the Web Audio "context" object
var midiAccess=null;  // the MIDIAccess object.
var oscillator=null;  // the single oscillator
var envelope=null;    // the envelope for the single oscillator
var attack=0.05;      // attack speed
var release=0.05;     // release speed
var portamento=0.05;  // portamento/glide speed
var activeNotes = []; // the stack of actively-pressed keys

window.addEventListener('load', function() {
  // patch up prefixes
  window.AudioContext = window.AudioContext || window.webkitAudioContext;

  context = new AudioContext();
  if (navigator.requestMIDIAccess)
    navigator.requestMIDIAccess().then( onMIDIInit, onMIDIReject );
  else
    alert("No MIDI support present in your browser. You're gonna have a bad time.")

  // set up the basic oscillator chain, muted to begin with.
  oscillator = context.createOscillator();
  oscillator.frequency.setValueAtTime(110, 0);
  envelope = context.createGain();
  oscillator.connect(envelope);
  envelope.connect(context.destination);
  envelope.gain.value = 0.0;  // Mute the sound
  oscillator.start(0);        // Go ahead and start up the oscillator
} );

function onMIDIInit(midi) {
  midiAccess = midi;

  var haveAtLeastOneDevice = false;
  var inputs = midiAccess.inputs.values();
  for (var input = inputs.next(); input && !input.done; input = inputs.next()) {
    input.value.onmidimessage = MIDIMessageEventHandler;
    haveAtLeastOneDevice = true;
  }
  if (!haveAtLeastOneDevice)
    alert("No MIDI input devices present. You're gonna have a bad time.");
}

function onMIDIReject(err) {
  alert("The MIDI system failed to start. You're gonna have a bad time.");
}

function MIDIMessageEventHandler(event) {
  // Mask off the lower nibble (MIDI channel, which we don't care about)
  switch (event.data[0] & 0xf0) {
    case 0x90:
      if (event.data[2] != 0) {  // if velocity != 0, this is a note-on message
        noteOn(event.data[1]);
        return;
      }
      // if velocity == 0, fall thru: it's a note-off. MIDI's weird, y'all.
    case 0x80:
      noteOff(event.data[1]);
      return;
  }
}

function frequencyFromNoteNumber( note ) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

function noteOn(noteNumber) {
  activeNotes.push( noteNumber );
  oscillator.frequency.cancelScheduledValues(0);
  oscillator.frequency.setTargetAtTime( frequencyFromNoteNumber(noteNumber), 0, portamento );
  envelope.gain.cancelScheduledValues(0);
  envelope.gain.setTargetAtTime(1.0, 0, attack);
}

function noteOff(noteNumber) {
  var position = activeNotes.indexOf(noteNumber);
  if (position != -1) {
    activeNotes.splice(position, 1);
  }
  if (activeNotes.length == 0) {  // shut off the envelope
    envelope.gain.cancelScheduledValues(0);
    envelope.gain.setTargetAtTime(0.0, 0, release );
  } else {
    oscillator.frequency.cancelScheduledValues(0);
    oscillator.frequency.setTargetAtTime( frequencyFromNoteNumber(activeNotes[activeNotes.length - 1]), 0, portamento );
  }
}

10. Security and Privacy Considerations of MIDI

There are two primary security and privacy concerns with adding the Web MIDI API to the web platform:

  1. Allowing the enumeration of the user's MIDI interfaces is a potential target for fingerprinting (that is, uniquely identifying a user by the specific MIDI interfaces they have connected). Note that in this context, what can be enumerated is the MIDI interfaces - not, for example, an individual sampler or synthesizer plugged into a MIDI interface, as these would not be enumerated, unless those devices are connected to the host computer with USB (USB-MIDI devices typically have their own MIDI interface, and would be enumerated). The interfaces that could be fingerprinted are equivalent to MIDI "ports", and for each device the API will expose the name of the device, manufacturer, and opaque identifier of the MIDI interface (but not any attached devices).

    Few systems will have significant numbers of MIDI devices attached; those systems that do will typically use hardware MIDI interfaces, not fanning out a dozen USB-MIDI connections through USB hubs. In this case, of course, enumerating the MIDI “devices” will only see the hardware MIDI interface(s), not the synthesizers, samplers, etc. plugged into it on the other side. Given the few number of devices plugged in, the amount of information exposed here is fairly symmetric with the fingerprinting concern exposed by other APIs such as the Gamepad API. The vast majority of systems have relatively few MIDI interfaces attached.

  2. Separate from the fingerprinting concerns of identifying the available ports are concerns around sending and receiving MIDI messages. Those issues are explored in more depth below.

In brief, the general categories of things you can do with MIDI ports are:

  1. Sending short messages (all messages except SysEx)
  2. Receiving short messages (all messages except SysEx)
  3. Sending SysEx messages. SysEx messages include both commonly recognized MIDI Time Code and MIDI Sample Dump Standard, as well as device-specific messages (like “patch control data for a Roland Jupiter-80 synthesizer”) that do not apply to other devices.
  4. Receiving SysEx messages.

The impact of each of these is:

  1. Sending short messages: sending note-on/note-off/controller messages would let you cause sounds to be played by attached devices, including (on Mac and Windows) any default virtual synthesizers. This by itself does not cause any concerning exposure - you can already make sounds without interaction, through <audio>, Flash, or Web Audio. Some attached devices might be professional lighting control systems, so it’s possible you could control stage lighting; however, this is extremely rare, and no known system has the ability to cause lasting damage or information leakage based solely on short messages; at worst, a malicious page could flash lights, and the user could close the page and reset their lighting controller.
  2. Receiving short messages: receiving note-on/note-off/controller messages would not cause any information exposure or security issues, as there is no identifying data being received, just a stream of controller messages - all of which must be initiated by the user on that MIDI device (except clock-type messages). This is very analogous to receiving keyboard or mouse events.
  3. Sending and Receiving SysEx. This is the biggest concern, because it would be possible to write code that looked for system-specific responses to sysex messages, which could identify the hardware available, and then use it to download data - e.g. samples stored in a sampler - or replace that data (erasing sample data or patches in the device), although both these scenarios would have to be coded for a particular device. It is also possible that some samplers might enable a system exclusive message to start recording a sample - so if the sampler happened to have a dedicated microphone attached (uncommon in practice, but possible), it would be possible to write code specific to a particular device that could record a short sample of sound and then upload it to the network without further user intervention. (You could not stream audio from the device, and most samplers have fairly limited memory, and MIDI Sample Dump sysex is a slow way to transfer data - it has to transcode into 7-bit - so it’s unlikely you could listen in for long periods.) More explicit fingerprinting is a concern, as the patch information/stored samples/user configuration could uniquely identify the system (although again, this requires much device-specific code; there is not standardized “grab all patches and hash it” capability.) This does suggest that system exclusive messages are in a security category of their own.

It's also useful to examine what scenarios are enabled by MIDI, mapped against these features:

  1. Receiving short messages. This is the most attractive scenario for Web MIDI, as it enables getting input from keyboards, drum pads, guitars, wind controllers, DJ/controllerist controllers, and more, and using those messages to control instruments and features in the Web Audio API as well as other control scenarios. (MIDI is the protocol of choice in the multi-billion-dollar music production industry for getting physical controllers such as knobs and buttons attached to your computer, both in pro/prosumer audio and media applications and in consumer applications like GarageBand.)
  2. Sending short messages - it is tempting to say sending is significantly less interesting, as the scenario of attached output devices like hardware synthesizers is less common in today's market. The major exception is that many MIDI controllers allow external host control of their indicator lights, which makes them dramatically more useful. For example, the very popular Novation Launchpad controller uses MIDI note-on/off messages sent to it to turn its buttons on and off and change their colors (a hypothetical sketch of this follows the list below). The same is true of nearly all DJ controllers.
  3. Sending and receiving SysEx - obviously, for more advanced communication with high-end hardware devices, SysEx is required. Unfortunately, some common MIDI commands are also sent as system exclusive messages (MIDI Machine Control, for example - generic start/stop/rewind/fast-forward commands), and many devices use system exclusive messages to program patches, send advanced controller messages, download firmware, and so on, all of which are much-demanded scenarios for Web MIDI. Some devices use SysEx as a direct control protocol, as they can pack more data into a single "message", and most devices use SysEx as a way to save and restore patches and configuration information on less expensive computer storage. Several of the major music hardware producers have expressed strong interest in using Web MIDI to provide web-based configuration and programming interfaces to their hardware. In short, disabling SysEx altogether would disable far more than just high-end scenarios.
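
As a concrete illustration of the controller-lighting scenario mentioned above, the sketch below sends note-on messages to every available output to light a pad. The pad number and the velocity-to-color mapping are hypothetical placeholders; the real values differ per device and are documented in each controller's programmer reference:

    // Hypothetical sketch: lighting pads on a Launchpad-style controller by
    // sending note-on messages to it. Pad note numbers and the velocity-to-
    // color mapping are placeholders; consult the device's programmer manual.
    function lightPad(output, padNote, colorVelocity) {
        output.send([0x90, padNote, colorVelocity]); // note-on on channel 1
    }

    function clearPad(output, padNote) {
        output.send([0x80, padNote, 0x00]); // note-off turns the pad back off
    }

    navigator.requestMIDIAccess().then(function (access) {
        access.outputs.forEach(function (output) {
            lightPad(output, 11, 0x3c); // pad 11, color value 0x3c (device-specific)
        });
    });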

In short: the additional fingerprinting exposure of enumerating MIDI devices is directly analogous to the Gamepad API’s additional fingerprinting exposure through gamepad enumeration; typical users will have at most a few devices connected, their configuration may change, and the information exposed is about the interface itself (i.e., no user-configured data).

The additional security concern for receiving short messages is also small - it’s analogous to listening to keyboard, mouse, mobile/laptop accelerometer, touch input or gamepad events; there is no additional information exposed, and all messages other than clock signals must be initiated by the user.

The additional concerns about sending short messages are analogous to any audio output - you cannot overwrite or expose user information, but you can make sounds play, change patches, or (in rare configurations) toggle lights - non-destructively and non-persistently.

System Exclusive, on the other hand, has a much less bounded potential, and it seems that distinguishing requests for SysEx separately in the API is a good idea, in order to more carefully provide user security hooks. The suggested security model explicitly allows user agents to require the user's approval before giving access to MIDI devices, although prompting the user for this approval is not currently required - but it also details that system exclusive support must be requested explicitly as part of that access request.
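
In practice, that means SysEx access has to be requested up front when calling into the API. A minimal sketch (assuming an implementation that honors the sysex option) might look like this:

    // Minimal sketch: SysEx support must be requested explicitly; the user
    // agent may prompt the user before granting it, or may reject outright.
    navigator.requestMIDIAccess({ sysex: true }).then(
        function (access) {
            // SysEx messages may now be sent and received on the available ports.
            console.log('MIDI access granted, sysexEnabled =', access.sysexEnabled);
        },
        function (error) {
            // Access (or SysEx specifically) was refused.
            console.warn('MIDI access request failed:', error);
        }
    );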

Source: https://www.w3.org/TR/webmidi/

VirtualPiano.eu™

Virtual Piano is a small synthesizer / MIDI player library written for your browser, with a GM-like timbre map.
All timbres are generated algorithmically from combinations of Oscillator and dynamically generated BufferSource nodes, without any PCM samples (an illustrative sketch of the approach follows the feature list below).

  • Playable with the mouse or a QWERTY keyboard.
  • Playing with a MIDI keyboard is also available via the Web MIDI API (Chrome).
  • Selectable timbres from the GM map; channel 10 is the drum track.
  • A quality setting switches between two timbre sets: a lightweight single-oscillator set or an FM-based set using two or more oscillators.
  • VirtualPiano.eu also has a built-in MIDI sequencer: select a local MIDI file via drag-and-drop or the file selector to play it.
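
VirtualPiano's own synthesis code is not reproduced here, but the general idea of generating a timbre purely from oscillators (no PCM samples), pitched from a MIDI note number, can be sketched with the Web Audio API roughly as follows; the waveform, envelope, and gain values are illustrative assumptions only:

    // Illustrative sketch only (not VirtualPiano's actual code): one oscillator
    // with a simple exponential decay envelope, pitched from a MIDI note number.
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

    function playNote(midiNote, duration) {
        var freq = 440 * Math.pow(2, (midiNote - 69) / 12); // MIDI note number to Hz
        var osc = audioCtx.createOscillator();
        var gain = audioCtx.createGain();

        osc.type = 'triangle';
        osc.frequency.value = freq;

        gain.gain.setValueAtTime(0.5, audioCtx.currentTime);
        gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + duration);

        osc.connect(gain);
        gain.connect(audioCtx.destination);
        osc.start();
        osc.stop(audioCtx.currentTime + duration);
    }

    playNote(60, 1.5); // middle C, decaying over 1.5 seconds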


The piano, a wonderful and fascinating string instrument, is the protagonist of countless compositions that have marked the history of music. Its story is a really interesting one, from an instrument for the few to an ambitious product in the modern world: we have to go back to the dawn of the 18th century to see the birth of the ancestor of the piano as we know it today. Born in Italy, remodeled several times in Germany, industrialized in the United States and now present in musical styles around the world, the piano is rich in history and features. Previously reserved for the elite and the upper class, the instrument was democratized in the second half of the 20th century.
The piano emerged from the evolution of the clavichord and the harpsichord, earlier keyboard string instruments. Bartolomeo Cristofori (1655-1731) developed the idea that the pressed keys would operate hammers capable of striking the strings. Little by little the design evolved, and an Alsatian family of German origin - the Silbermann family - improved the instrument by modifying its hammers.
The first major industrial manufacturer of pianos was German and was called Blüthner. Pianos became more aesthetically refined, more robust, and more powerful. The brand helped give the piano its status as a prestigious instrument and became part of the instrument's history.
It was only around 1880-1890 that the piano as we know it today took its final shape.
At the dawn of the twentieth century, the leather that wrapped the heads of the hammers was replaced by sheep's wool felt, which enriched the sound of the piano. The digital and Web 2.0 era has not hindered the success and longevity of this "noble" instrument: today the electronic piano offers the same sounds as an upright or grand piano, but it is mobile, light, compact and portable.

Source: https://virtualpiano.eu/
Making Music with the Web Platform