(19)
(11) EP 3 361 353 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
15.08.2018 Bulletin 2018/33

(21) Application number: 17382060.6

(22) Date of filing: 08.02.2017
(51) International Patent Classification (IPC): 
G06F 3/01 (2006.01)
G10H 1/00 (2006.01)
G10H 1/46 (2006.01)
G10H 5/00 (2006.01)
G10H 1/02 (2006.01)
(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
MA MD

(71) Applicant: OVALSOUND, S.L.
08005 Barcelona (ES)

(72) Inventors:
  • Posada Entrecanales, Alejandro
    17462 BORDILS (ES)
  • Clouth, Robert
    08001 BARCELONA (ES)
  • Leonardo Perotti, Enrique
    08440 CARDEDEU (ES)
  • De Heras Gomez, Miguel Angel
    08018 BARCELONA (ES)

(74) Representative: ZBM Patents - Zea, Barlocci & Markvardsen 
Plaza Catalunya, 1 2nd floor
08002 Barcelona (ES)



(54) GESTURE INTERFACE AND PARAMETERS MAPPING FOR BOWSTRING MUSIC INSTRUMENT PHYSICAL MODEL


(57) A method of synthesizing sounds and a multisensory sound system are disclosed. In example configurations, first and second analog input signals are received at a pressure sensor and a 3D tracking sensor, respectively. First and second digital signals are then generated and transmitted to a synthesizer. The synthesizer identifies first and second sets of sound parameters associated with the first and second digital input signals and generates synthesized sounds as a function of the identified parameters.


Description

BACKGROUND



[0001] Any object that produces sound can be considered a musical instrument; it is through purpose that the object becomes a musical instrument. The process of converting a bodily action into a sound through an object or instrument is sometimes direct, for example using an idiophone, and sometimes indirect, for example using an electromechanical instrument.

[0002] Electromechanical instruments typically comprise an electronic or digital sound module which produces a synthesized or sampled sound, and one or more electric sensors to trigger the sounds.

[0003] However, when various sensors are involved, the sound system needs to synchronize the signals received from the sensors in order to avoid lag between the bodily action that triggers a sensor and the sound generation.

SUMMARY



[0004] In a first aspect, a method of synthesizing sounds in a multisensory sound system is disclosed. The method comprises identifying first analog input signals at a pressure sensor and respective second analog input signals at a 3D tracking sensor in response to one or more user gestures; generating first digital input signals by sampling the first analog input signals; transmitting the first digital input signals to a synthesizer; identifying at least a first set of sound parameters at the synthesizer associated with the first digital input signals; generating second digital input signals by processing the second analog input signals; transmitting the second digital input signals to the synthesizer, wherein the second digital input signals are received by the synthesizer with a delay with respect to the respective first digital input signals; identifying at least a second set of sound parameters at the synthesizer associated with the second digital input signals; and summing the identified first and second sets of parameters to generate a synthesized sound as a function of the identified parameters.

[0005] As the intention of the user, e.g. a musician, may be to generate a single sound with a single gesture or bodily action, selecting different sound parameters in response to the signals received from different sensors may produce a smooth transition effect. The musician may thus not perceive any lag or delay between the bodily action or gesture and the generation of the sound, even if the respective signals arrive at the synthesizer at different times.

[0006] The perception of a lagging sound, i.e. a sound that may arrive with delay due to latency of one or more of the processing elements, is generally subjective. Trained musicians may perceive a delay where an untrained ear may not. However, there are thresholds, in the order of 10 milliseconds, below which even the most trained ear may not perceive any lag. Thus, if the first digital signal arrives at the synthesizer within such a threshold whereas the second digital signal arrives beyond it, a trained musician may perceive the delay and consider the sound system defective. By selecting different sound parameters, a stream of signals from the pressure sensor may instead be combined with a stream of signals from the 3D tracking sensor to generate a continuous stream of sounds and sound effects, so that no lag or delay is perceived even by the trained ear.
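
By way of illustration only, the following minimal Python sketch pictures this timing argument: the note is triggered from the pressure stream as soon as it arrives, and the later tracking-derived values merely update parameters of the already-sounding note. All timings, names and values here are assumptions of this example, not features taken from the application.

# Minimal sketch of the latency argument of paragraph [0006].
# All timings and parameter names are illustrative assumptions.
PERCEPTION_THRESHOLD_MS = 10  # order of magnitude given in the text

events = [
    (4, 'pressure', {'note': 'on'}),        # arrives within the threshold
    (25, 'tracking', {'reverb_mix': 0.4}),  # arrives with a delay
]

for t_ms, source, params in sorted(events, key=lambda e: e[0]):
    if source == 'pressure':
        # Trigger the sound immediately: the gesture-to-sound delay
        # stays below the perception threshold.
        assert t_ms <= PERCEPTION_THRESHOLD_MS
        print(t_ms, 'ms: note triggered with', params)
    else:
        # Update parameters of the already-sounding note; the listener
        # hears one continuous sound rather than a late onset.
        print(t_ms, 'ms: parameters updated with', params)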

[0007] In some examples, identifying first analog input signals may comprise pressing a pad attached to the pressure sensor. Such pressing may trigger the pressure sensor and generate the first digital signals. The pad may have various pressure points and the pressure sensor may detect pressure at any pressure point. Furthermore, the pressure sensor may detect any pressure above a predetermined threshold and sample the received pressure signal to generate the first digital input signal associated with the pressure intensity. During a pressure event, i.e. when pressure is above the predetermined threshold, the pressure sensor values may range between the predetermined threshold and a peak value. The synthesizer may then receive peak values and select sound parameters that may correspond to a musical note. During the rest of the pressure event, the synthesizer may receive envelope values and select sound parameters associated with the received envelope values and applied to the musical note, e.g. a pitch-shifting effect. If the user maintains pressure after the peak value, the synthesizer may maintain the effect, i.e. maintain the activation of the pressure effect, until pressure falls below the predetermined threshold. The synthesizer may then trigger or emit a sound based on the selected sound parameters.
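
A minimal Python sketch of the pressure-event logic just described is given below, assuming a normalized pressure scale; the threshold and the sample values are assumptions of this example.

# Minimal sketch of the pressure-event logic of paragraph [0007]. The
# threshold and the sample values are assumptions of this example.
PRESSURE_THRESHOLD = 0.1  # assumed normalized activation threshold

def pressure_events(samples):
    """Split a stream of pressure samples into events, yielding
    ('peak', value) messages while pressure rises and
    ('envelope', value) messages for the rest of the event."""
    in_event = False
    peak = 0.0
    for s in samples:
        if not in_event:
            if s > PRESSURE_THRESHOLD:
                in_event = True
                peak = s
                yield ('peak', peak)
        elif s <= PRESSURE_THRESHOLD:
            in_event = False  # pressure released: the event ends
        elif s > peak:
            peak = s
            yield ('peak', peak)  # still rising: update the note velocity
        else:
            yield ('envelope', s)  # rest of the event: envelope values

# A strike followed by sustained, slowly decaying pressure.
for message in pressure_events([0.0, 0.6, 0.9, 0.7, 0.65, 0.3, 0.05]):
    print(message)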

[0008] In some examples, identifying a second analog input signal may comprise identifying a movement above the 3D tracking sensor. The 3D tracking sensor may comprise a series of antennas distributed around the pad. The antennas may define zones and changes in the electromagnetic field of the zones may indicate movement in one or more of the defined zones. For example, successive excitation of two consecutive zones may indicate distance, movement along one direction and/or speed.
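
The following sketch illustrates, under assumed zone indices, units and timestamps, how successive excitation of consecutive zones may be turned into a direction and speed estimate.

# Illustrative sketch of inferring movement from successive zone
# excitations (paragraph [0008]). Zone layout, units and timing are
# assumptions of this example.

def detect_movement(excitations):
    """Given (timestamp_s, zone_index) events from the antennas, report
    direction and speed between consecutive zone excitations."""
    moves = []
    for (t0, z0), (t1, z1) in zip(excitations, excitations[1:]):
        if abs(z1 - z0) == 1 and t1 > t0:
            direction = 'forward' if z1 > z0 else 'backward'
            speed = 1.0 / (t1 - t0)  # zones per second (assumed unit)
            moves.append((direction, speed))
    return moves

# A hand sweeping over zones 0 -> 1 -> 2 in 0.1 s steps.
print(detect_movement([(0.0, 0), (0.1, 1), (0.2, 2)]))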

[0009] In some examples, identifying a movement may comprise measuring a distance of a moving element from the 3D tracking sensor. For example, the movement may be identified as a hovering movement when the distance is greater than a predetermined threshold. The predetermined threshold may substantially coincide with the distance of a pad surface (e.g. the upper or external surface of the pad) from the 3D tracking sensor that may lie below the pad.

[0010] When the distance is substantially equal to the predetermined threshold, the predetermined threshold substantially coinciding with the distance of a pad surface from the 3D tracking sensor, the movement may be identified as a stroking movement.
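
A sketch, under assumed distances, of the hover/stroke distinction of paragraphs [0009] and [0010] follows; the pad surface distance and the tolerance band for "substantially equal" are illustrative assumptions.

# Classifying a gesture from the measured distance, following
# paragraphs [0009]-[0010]. Both constants are assumptions.
PAD_SURFACE_DISTANCE_MM = 5.0  # assumed distance of pad surface from sensor
TOLERANCE_MM = 0.5             # assumed band for "substantially equal"

def classify_gesture(distance_mm):
    """Return 'stroke' at the pad surface, 'hover' above it."""
    if abs(distance_mm - PAD_SURFACE_DISTANCE_MM) <= TOLERANCE_MM:
        return 'stroke'  # substantially equal to the threshold
    if distance_mm > PAD_SURFACE_DISTANCE_MM:
        return 'hover'   # greater than the threshold
    return None  # below the surface; handled by the pressure sensor

print(classify_gesture(5.2), classify_gesture(30.0))  # stroke hover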

[0011] In some examples, generating the synthesized sound may comprise combining the selected first and second sets of sound parameters.

[0012] In some examples, the method may further comprise receiving a plurality of first and second analog input signals at a plurality of pressure and 3D tracking sensors. A complex synthesized sound may then be generated, and mixed or processed at an output.
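
One possible, simplified way to mix the per-pad sounds at the output is sketched below; the buffer format and the averaging used to avoid clipping are assumptions of this example.

# Mixing the synthesized sounds of several pads into one output buffer
# (paragraph [0012]). Buffer format and scaling are assumptions.

def mix(voices):
    """Sum equally long per-pad sample buffers, scaled to avoid clipping."""
    if not voices:
        return []
    n = len(voices[0])
    return [sum(v[i] for v in voices) / len(voices) for i in range(n)]

pad1 = [0.0, 0.5, 1.0, 0.5]  # illustrative sample buffers
pad2 = [0.2, 0.2, 0.2, 0.2]
print(mix([pad1, pad2]))  # [0.1, 0.35, 0.6, 0.35]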

[0013] In another aspect, a sound system is disclosed. The sound system may comprise pressure sensors to receive first analog input signals. The sound system may further comprise 3D tracking sensors to receive second analog input signals. The sound system may further comprise interaction pads arranged outwards from (e.g. above) the pressure and 3D tracking sensors, respectively. A controller may receive the first analog input signals from the pressure sensors and the second analog input signals from the 3D tracking sensors and generate first and second digital input signals, respectively. A communications interface may receive the first and second digital input signals and transmit them to the synthesizer. The synthesizer may be configured to receive the first digital input signals and select a first set of sound parameters, and to receive the second digital input signals and select a second set of sound parameters, whereby the synthesizer is configured to generate a synthesized sound as a function of the selected first and second sets of sound parameters. The sound system may comprise an instrument module and the synthesizer may be internal or external to the instrument module. The communications interface may be wired or wireless.

[0014] In some examples, the instrument module may comprise an ovoid case. The ovoid case may comprise an upper hemisphere with apertures that may host the interaction pads. The interaction pads may be radially distributed around the center of the hemisphere. Each interaction pad may be coupled to a pressure sensor and a 3D tracking sensor, respectively.

[0015] In some examples, each aperture may host a sensor base. The respective pressure sensor and 3D tracking sensor may be coupled to the sensor base. The sensor base may comprise support elements and the interaction pads may be mechanically attached to the support elements. The support elements may have resilient elements distributed below the respective interaction pad. The interaction pad may comprise an external surface. The support elements may be configured to maintain the external surface at a predetermined distance or distance range from the sensor base. Thus, predetermined distances or distance ranges may be defined to help identify gestures as hovering movements or as stroking movements.

[0016] In some examples the sound system may further comprise light sources, such as LEDs, to indicate interaction with the sound system.

[0017] In another aspect, a computer program product is disclosed. The computer program product may comprise program instructions for causing a computing system to perform a method of synthesizing sounds in a multisensory sound system according to some examples disclosed herein.

[0018] The computer program product may be embodied on a storage medium (for example, a CD-ROM, a DVD, a USB drive, on a computer memory or on a read-only memory) or carried on a carrier signal (for example, on an electrical or optical carrier signal).

[0019] The computer program may be in the form of source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other form suitable for use in the implementation of the processes. The carrier may be any entity or device capable of carrying the computer program.

[0020] For example, the carrier may comprise a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means.

[0021] When the computer program is embodied in a signal that may be conveyed directly by a cable or other device or means, the carrier may be constituted by such cable or other device or means.

[0022] Alternatively, the carrier may be an integrated circuit in which the computer program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant methods.

BRIEF DESCRIPTION OF THE DRAWINGS



[0023] Non-limiting examples of the present disclosure will be described in the following, with reference to the appended drawings, in which:

Fig. 1 is a block diagram of a multisensory sound system (MSS);

Fig. 2 schematically illustrates a multisensory sound system with a hang-type instrument unit;

Fig. 3 is a flow diagram of a method of synthesizing sounds in a multisensory sound system;

Fig. 4 is an exploded view of a multisensory sound system instrument according to an example;

Fig. 5 is an exploded view of an interaction pad according to an example.


DETAILED DESCRIPTION OF EXAMPLES



[0024] Fig. 1 is a block diagram of a multisensory sound system (MSS). MSS 100 may comprise an instrument unit 105, a sound synthesizer 130 and a reproduction unit 140. The instrument unit may comprise interaction pads 110. Although two such pads are shown, the skilled person may appreciate that the instrument unit may comprise any number of pads. Each interaction pad 110 may comprise a 3D tracking sensor 112, a pressure sensor 114 and an LED unit 116. The LED unit may be connected to LED controller 118. The instrument unit 105 may further comprise a data and power bus 120. The instrument unit 105 may further comprise a central controller 125. The central controller 125 may comprise a processing unit 127 and a communication module 129. The communication module 129 may transmit digital signals to the sound synthesizer 130. The sound synthesizer 130 may receive the digital signals and generate sound signals that may be reproduced at the reproduction unit 140.

[0025] Fig. 2 schematically illustrates an MSS with a hang-type instrument unit. The instrument unit 200 is an electronic device with an ovoid geometry. On the upper hemisphere there may be oval interaction pads 205 in a radial distribution and a circular area of interaction 210 at the center. On the lower hemisphere there may be connections to plug the device in and out, and a rubber piece to stabilize the product.

[0026] The instrument unit may comprise an ovoid housing with various oval holes on the upper hemisphere and one central circular hole at the north pole. Support elements may be placed under the holes and the interaction pads 205 may be mechanically attached to the support elements. The support elements may have soft resilient elements distributed at their periphery and one central resilient element.

[0027] All the different areas of interaction may be provided with the same functionalities. Two different sensors may measure the pressure over the pad, the position of the hand with respect to the center of the area, 3D gestures and the 3D position from the surface of the pad up to several centimeters above it. Also, an RGB LED may give instant feedback to the user 201 during interaction. In the interaction pads, the LEDs may be integrated in the little groove in the upper part. In the circular area at the top, seven LEDs may be integrated around the periphery.

[0028] The interaction pads may have a cover and a sensor base. The sensor base may have a pressure sensor and a 3D tracking sensor arranged around the periphery of the pad. A touch controller on the sensor base may receive signals from the pressure sensors and process all the information through a microprocessor, e.g. an ARM Cortex microprocessor, with embedded firmware.

[0029] The interaction pads may have one or more LEDs attached to the cover and connected through the sensor base. The LEDs may be remotely controlled by a software or by an audio engine that may be integrated in the instrument unit or that may be external to the instrument unit.

[0030] Each interaction pad may comprise resilient pieces distributed under the pad surface that may control the stability and allow the movement of the component. Four of them may be distributed at the periphery, working as compression springs, and one may be in the center, to actuate on the pressure sensor.

[0031] A screw, e.g. a polyamide screw, may fix the button to the case, allowing the movement of the system but preventing the assembly from pulling apart.

[0032] The same configuration may be adopted in all or part of the interaction pads, generating the same characteristics and feelings at all points of interaction.

[0033] On the inside, a central electronic board may be suspended, separated from the housing by four plastic fasteners.

[0034] In the middle of the lower hemisphere, a metal piece may hold the different connectors. A circular rubbery piece may extend all around the base to ensure grip and stability of the system on a surface. This same part may be responsible for facilitating the passage of cables without unbalancing the instrument.

[0035] The MSS may further comprise a synthesizer 215 and a sound reproduction unit 220. The reproduction unit may comprise or may be connected to a speaker or loudspeaker 225.

[0036] Fig. 3 is a flow diagram of a method of synthesizing sounds in a multisensory sound system. In block 305, first analogue input signals may be received at a pressure sensor. They may correspond to a pressure or strike movement of the user on the pad hosting the pressure sensor. In block 310, second analogue input signals may be received at a 3D tracking sensor. They may correspond to changes in the electrical field measured by the 3D tracking sensor. In block 315, first digital input signals may be generated by sampling the first analogue input signals. The first digital input signals may comprise peak values of the analogue signal and/or the envelope of the signal. In some cases an analogue signal may be a simple strike with no remaining pressure. Thus, the first digital input signal may only comprise a peak value. In other cases the analogue signal may be a strike followed by varying levels of pressure on the pad and consequently on the pressure sensor. Then, the first digital input signals may comprise peak and envelope values. The peak value may comprise the velocity (intensity) of the signal. In other cases, no strike may take place and the user may gradually press the pad, and consequently the pressure sensor. When the pressure sensor detects a pressure above a predetermined threshold, it may start generating envelope values. Thus, the first digital input signal may comprise only envelope values. While the first analog input signals remain above a predetermined threshold, the pressure sensor may perceive the first analog input signals received as part of a single pressure event. For example, the user may strike the pad and maintain pressure on the pad afterwards. Thus, the pressure sensor may measure pressure above the threshold for an extended period of time. During this time there may be various fluctuations of pressure that may correspond to movements of the user's hand around the pad. In block 320, the first digital input signals may be transmitted to a synthesizer. A set of first digital input signals may correspond to a pressure event. A pressure event may start when a sample of the analog input signal is measured above the threshold and may end when a sample of the analog input signal is below the threshold. All intermediate samples may be considered part of the pressure event. In block 325, the synthesizer may receive the peak and/or envelope values and a first set of sound parameters may be identified. The synthesizer may have a sound or a musical note associated with each pad. If the first digital input signal comprises peak values, then the synthesizer may emit the sound or musical note at the received velocity. If the first digital input signal comprises envelope information, then the synthesizer may select a preassigned parameter and apply the parameter to the associated sound or musical note. Example parameters may include one or a combination of the following: volume, pitch, cutoff frequency, resonance frequency, attack, decay, sustain, release, LFO (low frequency oscillator) or sound effects (e.g. delay, reverb). One skilled in the art may appreciate that any parameter included in a synthesizer may be assigned to the envelope. In block 330, a second digital input signal may be generated by processing the second analogue signal. The second digital input signal may comprise coordinate values, e.g. values related to the detection of the position of the user's hand in a two-dimension (e.g. X, Y) representation.
In block 335, the second digital input signal may be transmitted to the synthesizer. The second digital signal may be received by the synthesizer with a delay with respect to the first digital signal. The synthesizer may have preassigned parameters associated with the X, Y position values. The parameter may be one of the parameters previously mentioned. Typically, the envelope values may have different preassigned parameters than the coordinate values. The synthesizer may then select, in block 340, the preassigned parameter for the coordinate values and apply the parameter to the associated sound or musical note. As the coordinates change, the value of the associated parameter may change accordingly. Then in block 345, the first set of selected sound parameters and the second set of selected sound parameters may be summed, combined or aggregated to generate, at block 350, the mixed sound, at any given moment and for any interaction pad, as a function of the selected parameters.
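
The parameter selection and summing of blocks 325 to 350 may be pictured with the following Python sketch; the choice of cutoff frequency for the envelope and reverb mix for the X coordinate, and all names, are assumptions of this example rather than mappings prescribed by the application.

# Sketch of the parameter selection and summing of Fig. 3. Each set of
# sound parameters is modelled as a dict; the preassigned parameters
# below are arbitrary choices for illustration.
ENVELOPE_PARAM = 'cutoff_frequency'  # assumed preassigned envelope parameter
COORDINATE_PARAM = 'reverb_mix'      # assumed preassigned coordinate parameter

def first_set(peak=None, envelope=None):
    """First set of sound parameters, from the pressure sensor."""
    params = {}
    if peak is not None:
        params['velocity'] = peak          # peak value triggers the note
    if envelope is not None:
        params[ENVELOPE_PARAM] = envelope  # envelope modulates the note
    return params

def second_set(x, y):
    """Second set of sound parameters, from the 3D tracking sensor.
    Here only the X coordinate is mapped, arbitrarily, to the reverb mix."""
    return {COORDINATE_PARAM: x}

def combine(first, second):
    """Aggregate both sets so one sound is rendered from both gestures,
    even if the second set arrives with a delay."""
    merged = dict(first)
    merged.update(second)
    return merged

# A strike of velocity 0.9, sustained pressure 0.6, hand at x=0.3, y=0.7.
print(combine(first_set(peak=0.9, envelope=0.6), second_set(0.3, 0.7)))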

[0037] Fig. 4 is an exploded view of a multisensory sound system instrument according to an example. The instrument 400 may comprise an upper housing 401 and a lower housing 402. The upper housing may comprise openings, e.g. oval shaped openings, to host interaction pads. The lower housing may comprise a central opening. A base 403 may be hosted in the opening of the lower housing 402. The base 403 may be resilient and/or have a shape to allow comfortable rest on a user's lap. The instrument 400 may also comprise interaction pads. Interaction pads may comprise pad supports 408, printed circuit board (PCB) pads 412, intermediate clips 415 and pad surface caps 407. The PCB pads 412 may be attached to the pad supports 408, respectively. The PCB pads may comprise pressure and 3D tracking sensors. The intermediate clips 415 may separate the PCB pads 412 from the pad surface caps 407. The pad surface caps 407 may clip on the intermediate clips 415 so that the pad surface caps 407, used for interaction of the user with the instrument, do not directly sit on the PCB pads 412. The instrument 400 may further comprise a central pad 427 that may have similar or distinct functionality to the other interaction pads. Finally, the instrument 400 may have a controller (not shown) hosted in the instrument and connected to the PCB pads, and a communication and power source module 440 that may provide connections, communication ports, power ports, batteries, etc. The module 440 may be connected to the controller and to the PCB pads 412.

[0038] Fig. 5 is an exploded view of an interaction pad according to an example. Interaction pad 500 may comprise a pad support 505, a PCB pad 510, an intermediate clip 515 and a pad surface cap 520. The pad support 505 may have an ovoid shape with a raised periphery and may host resilient elements 506 and a sensor actuator 507. The PCB pad 510 may comprise a pressure sensor 512 and a 3D tracking sensor 514. The intermediate clip 515 and the pad surface cap 520 may each have an aperture to host an LED light diffuser 517. The LED light diffuser 517 may be configured to contact the LED 516 when the interaction pad is mounted. The interaction pad 500 may further comprise a resilient pocket 518, attached to a protrusion of the PCB pad 510, to absorb vibrations when the interaction pad is struck or pressed. The various pieces of the interaction pad may be held together with screws 519.

[0039] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments of the invention.

[0040] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0041] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0042] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.

[0043] Although only a number of examples have been disclosed herein, other alternatives, modifications, uses and/or equivalents thereof are possible. Furthermore, all possible combinations of the described examples are also covered. Thus, the scope of the present disclosure should not be limited by particular examples, but should be determined only by a fair reading of the claims that follow. If reference signs related to drawings are placed in parentheses in a claim, they are solely for attempting to increase the intelligibility of the claim, and shall not be construed as limiting the scope of the claim.

[0044] Further, although the examples described with reference to the drawings comprise computing apparatus/systems and processes performed in computing apparatus/systems, the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the system into practice.


Claims

1. Method of synthesizing sounds in a multisensory sound system, comprising:

receiving first analog input signals at a pressure sensor and second analog input signals at a 3D tracking sensor in response to one or more user gestures;

generating first digital input signals by sampling the first analog input signals;

transmitting the first digital input signals to a synthesizer;

identifying at least a first set of sound parameters at the synthesizer associated with the first digital input signals;

generating second digital input signals by processing the second analog input signals;

transmitting the second digital input signals to the synthesizer, wherein the second digital input signals are transmitted to the synthesizer with a delay with respect to the respective first digital input signals;

identifying at least a second set of sound parameters at the synthesizer associated with the second digital input signals;

summing the identified first and second sets of parameters to generate a synthesized sound as a function of the identified sets of parameters.


 
2. Method according to claim 1, wherein receiving first analog input signals comprises pressing a pad surface attached to the pressure sensor.
 
3. Method according to claim 1 or 2, wherein receiving second analog input signals comprises identifying a movement above the 3D tracking sensor.
 
4. Method according to any of previous claims, wherein identifying a movement comprises measuring changes in an electrical field caused by a moving element at a distance from the 3D tracking sensor.
 
5. Method according to claim 4, wherein identifying a movement further comprises identifying a hovering movement when the changes to the electrical field correspond to a distance greater than a predetermined threshold, the predetermined threshold substantially coinciding with the distance of a pad surface from the 3D tracking sensor.
 
6. Method according to claim 4, wherein identifying a movement further comprises identifying a stroking movement when the changes to the electrical field correspond to a distance substantially equal to a predetermined threshold, the predetermined threshold substantially coinciding with the distance of a pad surface from the 3D tracking sensor.
 
7. Method according to any of previous claims, wherein selecting at least a first set of sound parameters comprises selecting a note sound when at least one of the first digital input signals comprises a peak signal value or when the first digital input signal comprises an envelope signal value that exceeds a predetermined threshold.
 
8. Method according to claim 7, wherein selecting at least a first set of sound parameters further comprises selecting a first preassigned synthesizer parameter when the first digital input signals comprise envelope signal values.
 
9. Method according to any of previous claims, wherein selecting at least a second set of sound parameters comprises selecting a second preassigned synthesizer parameter when the second digital input signals comprise coordinate values.
 
10. Method according to any of previous claims, wherein generating a synthesized sound comprises combining the selected first and second sets of sound parameters.
 
11. Method according to any of previous claims, further comprising receiving a plurality of first and second analog input signals at a plurality of pressure and 3D tracking sensors and generating a plurality of synthesized sounds and mixing the plurality of synthesized sounds at an output.
 
12. A multisensory sound system, comprising:

pressure sensors to receive first analog input signals,

3D tracking sensors to receive second analog input signals,

interaction pads arranged above the pressure and 3D tracking sensors, respectively,

a controller to receive first sensor signals from the pressure sensors and second sensor signals from the 3D tracking sensors and generate first and second digital input signals, respectively;

a synthesizer;

a communications interface to receive first and second digital input signals and transmit them to the synthesizer,

wherein the synthesizer is configured to receive the first digital input signals and select a first set of sound parameters and receive the second digital input signals and select a second set of sound parameters, whereby the synthesizer is configured to generate a synthesized sound as a function of the selected first and second sets of sound parameters.


 
13. The multisensory sound system according to claim 12, further comprising an ovoid case, the ovoid case comprising an upper hemisphere with apertures that host the interaction pads, the interaction pads radially distributed around the center of the hemisphere, wherein each interaction pad is coupled to a pressure and a 3D tracking sensor, respectively.
 
14. The multisensory sound system according to claim 13, wherein each aperture hosts a sensor base, wherein the respective pressure sensor and 3D tracking sensor is coupled to the sensor base, wherein the sensor base comprises support elements and the interaction pads are mechanically attached to the support elements, wherein the support elements have resilient elements distributed below the respective interaction pad.
 
15. The multisensory sound system according to claim 14, wherein the interaction pad comprises an external surface, wherein the support elements are configured to maintain the external surface at a predetermined distance or distance range from the sensor base.
 




Drawing

Search report