(19)
(11)EP 3 254 385 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
29.04.2020 Bulletin 2020/18

(21)Application number: 16814964.9

(22)Date of filing:  25.05.2016
(51)International Patent Classification (IPC): 
G10H 1/00(2006.01)
H04B 11/00(2006.01)
G10H 1/38(2006.01)
G10H 1/26(2006.01)
(86)International application number:
PCT/US2016/034134
(87)International publication number:
WO 2016/209510 (29.12.2016 Gazette  2016/52)

(54)

COMMUNICATING DATA WITH AUDIBLE HARMONIES

ÜBERTRAGUNG VON DATEN MIT HÖRBAREN HARMONIEN

COMMUNICATION DE DONNÉES À HARMONIES AUDIBLES


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 24.06.2015 US 201514748680

(43)Date of publication of application:
13.12.2017 Bulletin 2017/50

(73)Proprietor: Google LLC
Mountain View, CA 94043 (US)

(72)Inventors:
  • SMUS, Boris
    Mountain View, California 94043 (US)
  • GETREUER, Pascal Tom
    Mountain View, California 94043 (US)

(74)Representative: Anderson, Oliver Ben et al
Venner Shipley LLP 200 Aldersgate
London EC1A 4HD (GB)


(56)References cited:
EP-A1- 1 457 971
JP-A- S6 257 325
KR-A- 20020 048 314
US-A- 5 915 237
US-A1- 2014 200 882
US-B2- 7 349 481
EP-A2- 1 161 075
JP-A- 2012 015 853
KR-A- 20070 029 987
US-A1- 2013 058 507
US-B2- 7 069 211
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    TECHNICAL FIELD



    [0001] This disclosure generally relates to performing data transfers using audio signals, and specifically to using audio signals made up of tones chosen according to a musical relationship.

    BACKGROUND



    [0002] Portable devices generally include a microphone or other device for receiving audible input (e.g., from the user), and speakers for producing audible output. Some of these portable devices also allow users to communicate with other users using various mechanisms, such as Short Message Service (SMS) text messages, email, and instant messages. Such communications, generally, utilize a wide area network, such as a cellular network, or a local area network, such as a WiFi or Bluetooth network.

    [0003] EP1457971 discloses an information transmission system capable of transmitting target information via voice, as well as an information encoding apparatus and an information decoding apparatus for use with the system. The information encoding apparatus (31) converts input text information to an intermediate code in accordance with a predetermined encoding method, and outputs a voice derived from voice information based on the intermediate code and supplemented with music arrangement information. The voice is transmitted either directly or via a broadcasting or communicating medium to a receiving side. The information decoding apparatus (34) on the receiving side receives the generated voice, recognizes a voice waveform from the received voice, and reproduces the original target information by decoding the intermediate code based on the recognized voice waveform. During the encoding, the intermediate code is assigned to at least one element of the voice, and the music arrangement information is used as a basis for determining at least one other element of the voice.

    SUMMARY



    [0004] In general, an aspect of the subject matter described in this specification may involve a process for communicating data with audible harmonies. This process may allow for localized data transfers between devices without the use of a local or wide area network, such as a WiFi or cellular network. As such, localized data transfers may be performed in areas in which such networks are unavailable. In addition, the localized data transfers may be performed using relatively simple devices (having only a speaker and/or a microphone) without the need for the devices to include specific components (e.g., chipsets, antennas, etc.) for communication via short-range radio frequency protocols such as Bluetooth. It will thus be appreciated that the data transfer process described herein may be more widely usable than radio frequency communication protocols which require network infrastructure and/or specific short-range RF communication components.

    [0005] Another benefit of the process is that listeners may find the sound-based communications between the devices pleasant. This may reduce the chance of the users of the devices or passers-by causing or requesting interruption of the communication of data due to the unpleasantness of the sounds being produced.

    [0006] For situations in which the systems discussed here collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect personal information, e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location, or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be anonymized so that no personally identifying information can be determined for the user, or a user's geographic location may be generalized where location information is obtained, such as to a city, zip code, or state level, so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about him or her and used by a content server.

    [0007] The scope of protection is defined by the claims.

    [0008] In some aspects, the subject matter described in this specification may be embodied in methods that may include the actions of determining a set of audio attribute values to be modulated to transfer a data set between devices, wherein the set of audio attribute values is selected based on a musical relationship between the audio attribute values, determining a symbol map associating each possible data value for the data set with an ordered sequence of audio attribute values from the set of audio attribute values, and sending the data set to one or more receiving devices. The action of sending the data set to one or more receiving devices may include, for each data value in the data set, determining the ordered sequence of audio attribute values associated with the data value from the symbol map and playing an ordered sequence of sounds representing the data value, each sound having an audio attribute value in the determined ordered sequence of audio attribute values.

    [0009] Other implementations of this and other aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. A system of one or more computers can be so configured by virtue of software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be so configured by virtue of having instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.

    [0010] These other versions may each optionally include one or more of the following features. For instance, implementations may include the set of audio attribute values being pitch values and the musical relationship being a chordal relationship between the pitches. In these implementations, the chordal relationship may be a major chord relationship, a minor chord relationship, a major seventh chord relationship, a minor seventh chord relationship, an augmented chord relationship, a diminished chord relationship, a suspended chord relationship, or combinations thereof.

    [0011] In one aspect, each of the one or more sounds in the ordered sequence includes a plurality of pitches played substantially simultaneously. In the present invention, the symbol map includes a chord progression including a plurality of sets of pitch values, and timing information indicating when each set of pitch values will be used to send associated data values. In these instances, each set of pitch values is selected based on a chord relationship between the pitch values in the set.

    [0012] In some alternative example implementations, the set of audio attributes may include duration values. In such implementations, the musical relationship may be a rhythmic relationship between the durations. In some examples, the set of audio attributes may include envelope shape parameter values.

    [0013] In some implementations, the symbol map may be stored by the one or more receiving devices before the data set is sent. In these implementations, the one or more receiving devices may store a plurality of different symbol maps, and the action of sending the data set may include sending a header including an identifier of a particular symbol map to be used when transferring the data set.

    [0014] In some examples, sending the data set may include sending a header between the sending device and the receiving devices including the symbol map. Sending the header including the symbol map may, for instance, include playing an ordered sequence of sounds representing each data value in the symbol map based on a default symbol map.

    [0015] Other implementations of this and other aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. A system of one or more computers can be so configured by virtue of software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be so configured by virtue of having instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.

    [0016] These other versions may each optionally include one or more of the following features. For instance, implementations may include identifying a symbol map associating each possible data value for the data set with an ordered sequence of audio attribute values from a set of audio attribute values selected based on a musical relationship between the audio attribute values, receiving a plurality of sounds from a sending device, identifying ordered sequences of the received sounds having audio attribute values associated with data values in the symbol map, and assembling the data values according to an order in which the identified sequences were received to form the data set.

    [0017] The details of one or more embodiments of the subject matter described in this specification and falling under the scope of the appended set of claims are set forth in the accompanying drawings and the description below. Other potential features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims. Passages referring to feature combinations different from those defined by the appended set of claims refer to examples which were originally filed but which do not represent embodiments of the presently claimed invention; such examples are shown for illustrative purposes only.

    DESCRIPTION OF DRAWINGS



    [0018] 

    FIG. 1 illustrates an example of a system for communicating data between devices with audible harmonies.

    FIG. 2 is a conceptual diagram of an exemplary framework for providing a symbol map for use in communicating data between devices with audible harmonies in a system.

    FIGS. 3a and 3b illustrate exemplary systems for communicating data between devices with audible harmonies.

    FIGS. 4a and 4b are tables for exemplary mappings of symbols to distinct values.

    FIG. 5 illustrates an example of a sequence in which data is transferred with audible harmonies in a system.

    FIG. 6 is a conceptual diagram of an exemplary framework for decoding data from audible harmonies in a system.

    FIGS. 7 and 8 illustrate exemplary processes for communicating data using audible harmonies.

    FIG. 9 is a diagram of exemplary computing devices.



    [0019] Like reference symbols in the various drawings indicate like elements.

    DETAILED DESCRIPTION



    [0020] This application describes techniques for transferring data between devices using audible signals made up of musically-related tones. One example method involves selecting one or more ordered sequences of audio attribute values based on a musical relationship between the audio attribute values. Each sequence is associated with a particular data value, such that the ordered sequence may be played by a first device (e.g., using a speaker) to encode the data value. The sequence may be received by a second device (e.g., using a microphone), and decoded to produce the data value. Therefore, the techniques described herein may enable such devices to exchange data using sound-based communications that are melodic and pleasant to nearby listeners. The communication protocols described herein support sound-based communications in a wide variety of musical styles, which may provide for enhanced user customization and experience.

    [0021] The present techniques may offer several advantages over previous techniques. For example, the present techniques allow for localized data transfers between devices without the use of a local or wide area network, such as a WiFi or cellular network, and without the need for the devices to include specific components configured to allow short-range radio frequency communication. As such, the present techniques may be more widely usable than current radio frequency communication techniques. In addition, local or wide area networks often charge fees for data transfers and, as such, the techniques described herein may allow users to perform data transfers to other users proximate to them without having to pay these charges. Further, because the frequencies of the tones used to represent various data values are chosen according to a musical relationship, the resulting audible signal may be pleasing to a user and others within earshot, as opposed to techniques in which audible signals may be perceived simply as noise or an unpleasant collection of unrelated tones. The techniques may also allow for variation of the musical relationship between the tones during a message, which may make the audible signal more pleasing and musical. Further, the dynamic nature of the communication structure provided by these techniques may allow users to create a "signature" sound by defining various signal attributes of messages. This dynamic nature and the ability to create a "signature" sound may also reduce the likelihood of communication between one pair of devices interfering with communication between another pair of devices in the vicinity.

    [0022] FIG. 1 illustrates an example of a system 100 for communicating data between devices using audible harmonies. The system 100 includes a first client device 102 and a second client device 104. The first and second client devices 102 and 104 may include mobile computing devices (e.g., cellular phones, tablets, etc.), laptop computers, desktop computers, and other computing devices.

    [0023] In operation, the first client device 102 transmits a data set to the second client device 104 by playing an audio signal 110 that is encoded with the data set through a speaker included in the first client device 102. The second client device 104 captures the audio signal 110 using a microphone included in the second client device 104, and decodes the data set from the captured signal. The audio signal 110 includes multiple symbols each made up of one or more musical tones. Each symbol represents a particular data value, allowing a data set to be represented by a series of one or more musical notes or chords containing multiple notes. The pitches of the notes used in the symbols may be chosen according to a musical or harmonic relationship to one another, so that the resulting audio signal may be perceived by a user as musical.

    [0024] For example, a user of the first client device 102 may want to share a website with a user of the second client device 104. The user of the first client device 102 may interact with an application running on the first client device 102 to send the URL for the website to the user of the second client device 104 as an auditory message in the musical style of a basic rock song (e.g., a I, IV, V7, I chord progression). The first client device 102 may transmit the URL to the second client device 104 by playing audio signal 110 that is encoded with a data set that indicates the URL for the website. The second client device 104 captures the audio signal 110, extracts the URL, and provides the user of the second client device 104 with access to the corresponding website.

    [0025] Audio signal 110 may include a melody 120 in a particular musical key (e.g., C major). In some examples, the melody 120 may resemble a popular song, such as by following a chord progression used by the song. One or more concurrent tones or series of single tones included in the melody may be used as symbols to represent data values in the data set to be transferred. The attributes of melody 120, e.g., the particular pitch, duration, and order of the sequence of symbols, may be representative of data set 130.

    [0026] When generating the audio signal 110, the first client device 102 may map each possible data value of the data set 130 to a set of attributes to be played in melody 120. For example, the first client device 102 may map a different symbol to each of the first 256 integer values (e.g., 0-255), such that each symbol represents one byte of information. If a different number of symbols are mapped, the amount of information represented by each symbol may be different. For example, if two distinct symbols are mapped, each symbol represents 1 bit of information (e.g., 0 or 1). The first client device 102 may reference a symbol map in order to convert data set 130 to melody 120. Upon receiving the audio signal 110, the second client device 104 may map the attributes of melody 120 to data values in order to decode the data set 130. That is, the second client device 104 may reference the symbol map in order to convert melody 120 to data set 130.
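The value-to-symbol mapping described above can be sketched in a few lines. This is an illustrative aid only, not the patented implementation; the function names are invented for the example, and any hashable object (such as an ordered note sequence) could serve as a symbol.

```python
import math

def bits_per_symbol(num_symbols: int) -> int:
    """Number of data bits each distinct symbol can carry."""
    return int(math.log2(num_symbols))

def build_symbol_map(symbols):
    """Associate each possible data value with one distinct symbol.

    `symbols` is a list of distinct audio symbols (e.g., ordered note
    sequences); data value i is encoded by symbols[i].
    """
    return {value: symbol for value, symbol in enumerate(symbols)}

# With 256 distinct symbols, each symbol carries one byte (8 bits);
# with 2 distinct symbols, each symbol carries a single bit.
```

As the paragraph notes, the information per symbol follows directly from the number of distinct symbols mapped: 256 symbols yield one byte per symbol, two symbols yield one bit.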

    [0027] The symbol map may be known to the first and second client devices 102 and 104 in advance, e.g., prior to the first client device 102 playing audio signal 110. In some implementations, the audio signal 110 may be utilized by the first client device 102 to indicate the symbol map to the second client device 104. For instance, the first client device 102 may include data that indicates the symbol map in a header portion of the audio signal 110, e.g., a portion of audio signal 110 that precedes melody 120. This data may include an identifier of a pre-defined symbol map known to both client devices, or may include a representation of a new symbol map chosen by the first client device 102.

    [0028] Melody 120 may be made up of measures 122, 124, 126, and 128a, which represent different segments of time and correspond to a specific number of beats. A series of musical chord changes may occur throughout measures 122, 124, 126, and 128a such that melody 120 may follow a particular chord progression.

    [0029] In some implementations, melody 120 may follow a chord progression as dictated by the symbol map. The particular chord progression exhibited by melody 120 in the example of FIG. 1 is I-IV-V7-I in the key of C major. In this example, measures 122, 124, 126, and 128a may correspond to chords I, IV, V7, and I, in the key of C major, respectively. Since melody 120 is in the key of C major in this example, chord I corresponds to the musical notes of C-E-G, chord IV corresponds to the musical notes of F-A-C, and chord V7 corresponds to the musical notes of F-G-B-D.
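The chord-to-note relationships quoted in paragraph [0029] can be written out as a small table, shown here purely as an illustration; the dictionary layout and function name are assumptions made for this sketch, and octaves are omitted.

```python
# Note sets for the chords named in the I-IV-V7-I progression in the key
# of C major, as given in the description (pitch classes only).
CHORDS_C_MAJOR = {
    "I":  ["C", "E", "G"],
    "IV": ["F", "A", "C"],
    "V7": ["F", "G", "B", "D"],
}

def progression_notes(progression):
    """Expand a chord progression into per-measure note sets."""
    return [CHORDS_C_MAJOR[chord] for chord in progression]

# The four measures 122, 124, 126, and 128a of melody 120:
measures = progression_notes(["I", "IV", "V7", "I"])
```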

    [0030] The symbol map may define the structure of melody 120. In this example, the symbol map may require that, for measure 122, data values be encoded as an ordered sequence of musical notes or chords that are selected from chord I, e.g., C, E, and G. The particular pitch, duration, and order of the sequence of notes included in measure 122 may, for example, be representative of the data value 132. That is, the second client device 104 may identify the portion of audio signal 110 corresponding to measure 122 as the data value 132, which is the hexadecimal value "0xB6" in this example. The symbol map may enable the second client device 104 to perform this type of melody to data value conversion. Similarly, the first client device 102 may use the symbol map to determine the particular sequence of notes that are to be played in measure 122 so as to indicate data value 132.

    [0031] Continuing with the above example, data value 134 may be encoded in the portion of audio signal 110 corresponding to measure 124 as a particular sequence of the musical notes F, A, and C; data value 136 may be encoded in the portion of audio signal 110 corresponding to measure 126 as a particular sequence of the musical notes F, G, B, and D; and data value 138 may be encoded in the portion of audio signal 110 corresponding to measure 128a as a particular sequence of the musical notes C, E, and G.

    [0032] The relationship between each measure of melody 120 and the chord in which data is encoded in is conveyed in the symbol map associated with the audio signal 110. By following the symbol map, the second client device 104 may, for example, expect to receive a sequence of the musical notes F, A, and C from first client device 102 for measure 124. The sequence of musical notes played by the first client device 102 in the I-IV-V7-I chord progression of measures 122-128a, may represent the sequence of data values 132-138.

    [0033] In this way, the structure of melody 120, as defined by the symbol map, may provide an audio attribute to data value mapping that varies as a function of time. In some cases, the symbol map may also specify one or more additional characteristics of the transmitted audio signal. For example, one or more Attack-Decay-Sustain-Release ("ADSR") parameters for the envelope of the transmitted audio signal may vary according to the symbol map.

    [0034] In some implementations, envelope characteristics may dynamically change throughout melody 120 as indicated by the symbol map. This may, for instance, allow for melody 120 to emulate a song that is played using multiple instruments. The symbol map may also specify the key and tempo in which melody 120 is played. In addition, the symbol map may specify the time signature of melody 120, which is 4/4 in the example of system 100.

    [0035] The structure of melody 120 that is defined by the symbol map may be represented as "S" in the following expression:

        Si = {Ki, Di, Ti}

    where "i" is the index of a symbol within melody 120, "K" is the chord from which notes may be selected to represent symbol i, "D" is the duration of symbol i, and "T" is the number of tones included in symbol i.

    [0036] In the example of FIG. 1, the chord K may correspond to the chord progression of melody 120. The chord K of the structure for each symbol may be described as:

        Ki = {f1, f2, ..., fn}

    where each "fi" corresponds to the frequency of a note included in the chord for symbol i and "n" is the total number of notes in the chord. For example, if the chord K is a major chord or minor chord, which are both triads (e.g., chords made up of three notes), then n would be equal to 3.

    [0037] Similarly, if the chord K is a major seventh chord or a minor seventh chord, which are both tetrads (e.g., chords made up of four notes), then n would be equal to 4. For instance, symbols that are included in measure 126 may have a structure S with a chord K that may be described as:

        K = {F4, G4, B4, D5}

    where F4 is a 4th octave F-note that has a 349.2 Hz pitch, G4 is a 4th octave G-note that has a 392 Hz pitch, B4 is a 4th octave B-note that has a 493.9 Hz pitch, and D5 is a 5th octave D-note that has a 587.3 Hz pitch. It can be noted that the K for symbols of measure 126 contains four notes (e.g., n = 4), because it is a seventh chord. Since chord K may be defined on a symbol-by-symbol basis, it becomes possible to implement audio signal 110 with any desired chord progression.
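The pitches quoted above follow standard equal temperament relative to A4 = 440 Hz, which can be checked numerically. The MIDI note numbers below are standard (A4 = 69); the function name and dictionary are invented for this sketch and are not part of the specification.

```python
# Equal-temperament pitch of a note, relative to A4 = 440 Hz.
def note_frequency(midi_number: int, a4: float = 440.0) -> float:
    return a4 * 2.0 ** ((midi_number - 69) / 12.0)

# MIDI numbers of the notes quoted for the V7 chord of measure 126.
V7_CHORD = {"F4": 65, "G4": 67, "B4": 71, "D5": 74}

frequencies = {name: round(note_frequency(m), 1) for name, m in V7_CHORD.items()}
# frequencies -> {'F4': 349.2, 'G4': 392.0, 'B4': 493.9, 'D5': 587.3},
# matching the pitches stated in the description.
```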

    [0038] The duration D of each symbol most simply corresponds to the amount of time for which the tone(s) of the symbol are to be played. Consider an example in which the symbol map specifies that melody 120 is to be played at a tempo of 120 beats per minute ("BPM"). In this scenario, an exemplary symbol 144, which is a half note, may have a duration of 1 second.
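The relationship between tempo and symbol duration in this example reduces to a one-line helper; the function name is an assumption made for illustration, and beats are counted in quarter notes (so a half note is 2 beats and an eighth note is 0.5 beats).

```python
def symbol_duration(tempo_bpm: float, beats: float) -> float:
    """Duration in seconds of a symbol spanning `beats` quarter-note beats."""
    return beats * 60.0 / tempo_bpm

# At 120 BPM a half note (2 beats) lasts 1.0 s, matching symbol 144;
# at 240 BPM (the tempo of FIG. 2) a half note lasts 0.5 s and an
# eighth note lasts 0.125 s.
```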

    [0039] The number of tones T indicates how many concurrent tones are to be played for each symbol. This parameter allows for symbols to be transmitted in the form of single notes, as well as full or partial chords. As described more fully below, these tones will also serve as a means by which various data encoding schemes may be implemented.

    [0040] By encoding data using audio signals that are musically-harmonious and dynamic, the techniques described herein provide for sound-based communications that nearby persons may find pleasant. Further, these techniques may allow for a great degree of customization. In some implementations, the symbol map may be determined based on one or more user-defined parameters.

    [0041] For instance, a user may be able to select a chord progression for the symbol map. In the example described above, the user of first client device 102 may have interacted with an application running on the first client device 102 to specify that the website URL be transmitted using the chord progression of his favorite classic rock song. Such a selection would be reflected in the structure S of melody 120. In some examples, the Attack-Decay-Sustain-Release ("ADSR") envelope parameters may be user-defined. This may allow for the audio signal to be synthesized in a multitude of ways. In some implementations, envelope characteristics may dynamically change throughout the melody as indicated by the symbol map.

    [0042] The musical relationships between the notes and chords included in the symbols may also vary. In some cases, the symbol map may specify that tones from a particular scale in a particular key will be used to make up the various symbols. For example, a symbol map may specify that the C pentatonic major scale is to be used, meaning that symbols will be formed by selecting notes and chords from the notes of that scale (e.g., [C, D, E, G, A]). The symbol map may specify other scales in a particular key, such as, for example, a major scale, a natural minor scale, a harmonic minor scale, a melodic minor scale, a pentatonic minor scale, a blues scale, a particular mode of a base scale (e.g., the Phrygian mode of the C major scale), or other scales.

    [0043] FIG. 2 illustrates a conceptual diagram of an exemplary framework for providing a symbol map for use in communicating data between devices with audible harmonies in a system 200. The system 200 includes a symbol map 210, which may function in a manner similar to that of symbol maps described above in association with FIG. 1. The symbol map 210 may provide structures S1-SN which each correspond to a different symbol in a sequence of N symbols that are to be included in a melody produced in accordance with symbol map 210. Each symbol structure of the symbol map may specify various attributes (e.g., chord K, duration D, tones T, ADSR parameters, etc.) of its corresponding symbol. Additionally, the symbol map 210 provides the particular order in which the symbols are to occur in the melody.

    [0044] In the example of FIG. 1, a melody produced by the first client device 102 may conform to symbol map 210 and the second client device 104 may decode a captured melody according to symbol map 210. That is, the first client device 102 may reference a symbol map, such as symbol map 210, in order to convert a data set to a melody. Similarly, the second client device 104 may reference the same symbol map 210 in order to convert the melody of the captured audio signal to a data set.

    [0045] The symbol map 210 may be known to the first and second client devices 102 and 104 in advance, e.g., prior to the first client device 102 playing an audio signal. In some implementations, the audio signal produced by the first client device 102 may be utilized to indicate the symbol map to the second client device 104. For instance, the first client device 102 may include data that indicates the symbol map 210 in a header portion of the audio signal, e.g., a portion of audio signal 110 that precedes melody 120. This data may include an identifier for symbol map 210, which may be known to both client devices, or may include a representation of symbol map 210 chosen by the first client device 102.

    [0046] The symbol map 210 may provide the structure for its melody at a symbol-level. For instance, the symbol map 210 may specify the chord K, duration D, and tone T, as described above, for each symbol of the melody. The symbol map 210 may also specify ADSR envelope parameters for each symbol. In this particular example, the symbol map 210 provides a structure for a melody in the key of C major at a tempo of 240 BPM. The chord progression exhibited by the melody specified by symbol map 210, which extends from symbol S1 to symbol SN, may be viewed as a I-IV-V7-I chord progression.

    [0047] With symbol structure 212, for example, the symbol map 210 may specify that the third symbol of the melody be played for 0.5 seconds and include one or more of the tones C4, E4, and G4. Since all of the symbols specified by symbol map 210 are in the key of C major, a symbol having structure 212 (e.g., the third symbol in the melody) may be viewed as including one or more tones selected from a I chord. Similarly, a symbol having structure 212 may be viewed as a half note, as it has a duration of 0.5 seconds where the tempo of the melody provided by the symbol map 210 is 240 BPM. The particular combination of tones C4, E4, and G4 that are played as a symbol having structure 212 will, as described in more detail in the discussion of FIGS. 3a-4b below, depend on the particular one or more data values that are to be conveyed with that symbol.

    [0048] With symbol structure 214, for example, the symbol map 210 may specify that the eleventh symbol of the melody be played for 0.125 seconds and include one or more of the tones F4, G4, B4, and D4. Since all of the symbols specified by symbol map 210 are in the key of C major, a symbol having structure 214 (e.g., the eleventh symbol in the melody) may be viewed as including one or more tones selected from a V7 chord. Similarly, a symbol having structure 214 may be viewed as an eighth note, as it has a duration of 0.125 seconds where the tempo of the melody provided by the symbol map 210 is 240 BPM. The particular combination of tones F4, G4, B4, and D4 that are played as a symbol having structure 214 will, as described in more detail in the discussion of FIGS. 3a-4b below, depend on the particular one or more data values that are to be conveyed with that symbol.
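A symbol map of the kind described in paragraphs [0043]-[0048] might be represented as a per-symbol table of structures. The dataclass below is a hypothetical representation for illustration only; the two entries mirror the quoted structures 212 and 214 (a half note drawn from a I chord and an eighth note drawn from a V7 chord, at 240 BPM).

```python
from dataclasses import dataclass

@dataclass
class SymbolStructure:
    """Per-symbol structure Si: candidate chord K, duration D, tone count T."""
    chord: tuple        # K: notes from which the symbol's tones are selected
    duration_s: float   # D: how long the symbol sounds, in seconds
    tones: int          # T: number of concurrent tones

# Hypothetical fragment of a symbol map like the one in FIG. 2, keyed by
# symbol position within the melody.
symbol_map = {
    3:  SymbolStructure(chord=("C4", "E4", "G4"), duration_s=0.5, tones=1),
    11: SymbolStructure(chord=("F4", "G4", "B4", "D4"), duration_s=0.125, tones=1),
}
```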

    [0049] In some examples, the symbol map 210 may include at least one free portion 216a-b within the melody structure. A free portion 216 within the melody as provided by the symbol map 210 may be a portion of the melody for which a symbol representing a data value is not specified. Rather than being representative of data values, such free portion 216 may be implemented within the melody for purposes of musical aesthetics. In the example of FIG. 1, the second client device 104 may determine, from symbol map 210, that a portion of the melody played by the first client device 102 corresponding to free portion 216 does not encode a data value. Accordingly, the second client device 104 may simply not perform decoding processes for free portion 216.

    [0050] In some examples, audio may be played during the free portion 216 of the melody. In these examples, the audio played may not explicitly represent data values, but rather just be provided for musical aesthetics. Free portion 216 could, for instance, include drum or vocal tracks. In some implementations, the free portion 216 may include tones similar to those included in symbols specified by symbol map 210. In other examples, the symbol map 210 may include at least one free portion 216 as a rest so as to create rhythm within the melody. In reference to FIG. 1, the speaker of first client device 102 may, in these examples, be silent for the duration of free portion 216. The symbol map 210 may specify duration information for each of one or more free portions 216.

    [0051] FIG. 3a illustrates an example of a system 300a for communicating data between devices with audible harmonies. The system 300a may function in a manner similar to that of system 100 which has been described above in association with FIG. 1. Accordingly, system 300a includes first and second client devices 102 and 104.

    [0052] In operation, the first client device 102 may transmit a data set 330a to the second client device 104 by, for example, playing an audio signal 310a having a melody 320a through a speaker that is included in the first client device 102. The second client device 104 may record the audio signal 310a using a microphone that is included in the second client device 104 and decode the data set 330a from the recording.

    [0053] The melody 320a produced by the first client device 102 may conform to symbol map 312a and the second client device 104 may decode captured melody 320a according to symbol map 312a. That is, the first client device 102 may reference symbol map 312a in order to convert data set 330a to melody 320a. Similarly, the second client device 104 may reference the same symbol map 312a in order to convert the melody 320a of the captured audio signal 310a to data set 330a.

    [0054] Melody 320a may be made up of measures 322a, 324a, 326a, and 328a, which represent different segments of time and correspond to a specific number of beats. As outlined in the symbol map 312a, a series of musical chord changes may occur throughout measures 322a, 324a, 326a, and 328a such that melody 320a may follow a particular chord progression. In this example, it can be seen that at least some of the symbols included in each of measures 322a, 324a, 326a, and 328a are multi-tonal.

    [0055] In some implementations, the number of tones defined for each symbol may be an exact number of tones that are to be included in the symbol. In the example of system 300a, measures 322a and 328a may have a T-value of 2, while measure 326a has a T-value of 3. In these implementations, different combinations of tones may represent different data values. The number of distinct values that can be encoded by the ith symbol, vi, may be described as:

    vi = C(ni, Ti) = ni! / (Ti! (ni − Ti)!)

    where ni is the total number of notes in the chord Ki and Ti is the exact number of tones that are to be included in the ith symbol. Each combination of tones may, for instance, correspond to a different data value.
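The count above is simply a binomial coefficient. A minimal sketch (illustrative Python; `distinct_values` is a hypothetical helper, not part of the specification):

```python
from math import comb

def distinct_values(n: int, t: int) -> int:
    """Number of distinct values encodable by a symbol whose chord K
    has n notes when exactly t tones must sound: C(n, t)."""
    return comb(n, t)

# Triad (n=3) with T=2, as in measures 322a, 324a, and 328a:
print(distinct_values(3, 2))  # 3
# Tetrad (n=4) with T=3, as in measure 326a:
print(distinct_values(4, 3))  # 4
```

These values match the examples discussed below for measures 322a and 326a.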

    [0056] For example, consider a symbol for which symbol map 312a has specified a chord K that is a triad (e.g., n=3) and a T-value of 2. This exemplary symbol may, for example, be found in one of measures 322a, 324a, and 328a. The number of distinct values that can be encoded by this exemplary symbol would be 3. In other words, there are three possible combinations of tones that may be played concurrently as this symbol.

    [0057] In some implementations, a symbol may also be represented as an absence of tones. If an absence of tones (e.g., a rest) were to be considered a symbol that is a fourth combination of tones, then each symbol under the scheme of the abovementioned example may be representative of two bits of data. Symbol S8a, for example, may be considered such a symbol that is represented as an absence of tones. It is to be understood that symbol S8a is different than, for example, a free portion such as free portion 314a or that which has been described above in association with FIG. 2, as symbol S8a is representative of a distinct value.

    [0058] In another example, consider a symbol for which a symbol map has specified a chord K that is a tetrad (e.g., n=4) and a T-value of 3. This exemplary symbol may, for example, be found in measure 326a. The number of distinct values that can be encoded by this exemplary symbol would be 4. In other words, there are four possible combinations of tones that may be played concurrently as this symbol. In this way, each symbol included in measure 326a may be viewed as being representative of two bits of data. Similarly, the sequence of four symbols included in measure 326a may be viewed as being representative of 8 bits of data.

    [0059] As with all audio attributes, the T-value may vary from symbol to symbol as defined by the symbol map 312a. For example, it can be seen that symbol S6a, which is included as the second symbol in measure 324a and the sixth symbol in melody 320a, includes only a single note, C5. While the symbol map 312a may have specified a T-value of 2 for the other symbols within measure 324a, e.g., S5a, S7a, and S8a, the symbol map 312a may have specified a T-value of 1 for symbol S6a. Given that a IV chord is a triad, symbol S6a may be capable of representing one of three distinct values.

    [0060] In the example of system 300a, the portions of melody 320a corresponding to measures 322a, 324a, 326a, and 328a may correspond to data values 332a, 334a, 336a, and 338a, respectively. That is, sequences of symbols corresponding to structures S1a-S4a, S5a-S8a, S9a-S12a, and S13a-S15a may be viewed as corresponding to data values 332a, 334a, 336a, and 338a, respectively.

    [0061] FIG. 4a is a table 400a for an exemplary mapping of symbols to distinct values according to the example described in association with FIG. 3a. The table 400a illustrates how an exemplary symbol structure 410a, which may be specified by the symbol map described above in reference to FIG. 3a, can be utilized to represent different data values. A symbol which conforms to symbol structure 410a may, for example, include two tones (e.g., T=2) selected from the chord of C major (e.g., K = {C4, E4, G4}).

    [0062] In accordance with implementations described above in reference to FIG. 3a, it is understood that symbol structure 410a may provide for at least three distinct values. In other words, there are three different possible pairings of tones within a triad. These three distinct values are depicted in FIG. 4a as distinct values 412a, 414a, and 416a. The combinations of tones representing each of the distinct values are indicated in table 400a in both integer and musical notations 422a-424a. The table 400a further includes frequency peak information 426a that indicates the frequency components of the combinations of tones corresponding to each distinct value. In some implementations in which a symbol may also be represented as an absence of tones, the exemplary mapping of symbols to distinct values may further include a distinct value 418a corresponding to a rest.
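The three pairings underlying such a mapping can be enumerated directly. A minimal sketch (illustrative Python; the `mapping` variable and value numbering are hypothetical, not the reference numerals of table 400a):

```python
from itertools import combinations

# Hypothetical symbol structure: chord K = {C4, E4, G4}, exactly T = 2 tones.
K = ("C4", "E4", "G4")
T = 2

# Each 2-tone combination is assigned to one distinct value; an optional
# rest (no tones) could serve as a fourth value, yielding 2 bits per symbol.
mapping = {value: tones for value, tones in enumerate(combinations(K, T))}
print(mapping)  # {0: ('C4', 'E4'), 1: ('C4', 'G4'), 2: ('E4', 'G4')}
```

Either device could derive the same mapping from the shared symbol map, since `itertools.combinations` emits tuples in a deterministic order.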

    [0063] It can be understood that various messages can be conveyed in melodies through ordered sequences of symbols which are each representative of distinct values. In the example of FIG. 3a, the ordered sequence of symbols included in measure 322a may correspond to data value 332a. Consider an example in which the symbol map 312a defines each of the symbol structures S1a, S2a, S3a, and S4a to have symbol structure 410a (e.g., K = {C4, E4, G4} and T=2). In this example, the first symbol of measure 322a may, for instance, be viewed as having symbol structure S1a and representing distinct value 416a. That is, the first symbol of measure 322a corresponds to distinct value 416a in that it includes the notes of C4 and G4. It follows that the third symbol of measure 322a may, in this example, also be viewed as representing distinct value 416a. Similarly, the second and fourth symbols of measure 322a may be viewed as having symbol structures S2a and S4a representing distinct values 414a and 412a, respectively.

    [0064] FIG. 3b illustrates an example of a system 300b for communicating data between devices with audible harmonies. The system 300b may function in a manner similar to that of system 100 which has been described above in association with FIG. 1. Accordingly, system 300b includes first and second client devices 102 and 104.

    [0065] In operation, the first client device 102 may transmit a data set 330b to the second client device 104 by, for example, playing an audio signal 310b having a melody 320b through a speaker that is included in the first client device 102. The second client device 104 may record the audio signal 310b using a microphone that is included in the second client device 104 and decode the data set 330b from the recording.

    [0066] The melody 320b produced by the first client device 102 may conform to symbol map 312b and the second client device 104 may decode captured melody 320b according to symbol map 312b. That is, the first client device 102 may reference symbol map 312b in order to convert data set 330b to melody 320b. Similarly, the second client device 104 may reference the same symbol map 312b in order to convert the melody 320b of the captured audio signal 310b to data set 330b.

    [0067] Melody 320b may be made up of measures 322b, 324b, 326b, and 328b, which represent different segments of time and correspond to a specific number of beats. As outlined in the symbol map 312b, a series of musical chord changes may occur throughout measures 322b, 324b, 326b, and 328b such that melody 320b may follow a particular chord progression. In this example, it can be seen that symbols included in each of measures 322b, 324b, 326b, and 328b have varying numbers of tones.

    [0068] In some implementations, the number of tones defined for each symbol may correspond to a maximum number of tones that may be played concurrently. A minimum number of tones that may be played as a symbol may also be considered. While the number of distinct values vi in the example described above in association with FIG. 3a may be equal to the binomial coefficient indexed by ni and Ti, the number of distinct values vi for implementations where the number of tones T corresponds to a maximum number of tones that may be played concurrently may be seen as a sum of binomial coefficients indexed by ni and each number of tones from the minimum number of tones to the maximum number of tones.

    [0069] In some examples, the maximum number of tones that may be played concurrently may be equal to n (e.g., the number of tones included in chord K) and the minimum number of tones that may be played concurrently may be equal to zero (e.g., a rest). In these examples, each symbol may be viewed as being representative of n bits of data.
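The sum-of-binomials count described above may be sketched as follows (illustrative Python; `distinct_values` is a hypothetical helper, not part of the specification):

```python
from math import comb

def distinct_values(n: int, t_min: int, t_max: int) -> int:
    """Distinct values when between t_min and t_max of chord K's n notes
    may sound concurrently: a sum of binomial coefficients C(n, t)."""
    return sum(comb(n, t) for t in range(t_min, t_max + 1))

# Triad, 0 to 3 tones: 1 + 3 + 3 + 1 = 8 values, i.e. 3 bits per symbol.
print(distinct_values(3, 0, 3))  # 8
# Tetrad, 0 to 4 tones: 16 values, i.e. 4 bits per symbol.
print(distinct_values(4, 0, 4))  # 16
```

With t_min = 0 and t_max = n the sum collapses to 2^n, which is why each symbol carries n bits in these examples.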

    [0070] For example, consider a symbol for which a symbol map has specified a chord K that is a triad (e.g., n=3), the maximum number of tones is equal to n, and the minimum number of tones is equal to zero. This exemplary symbol may, for example, be found in one of measures 322b, 324b, and 328b. The number of distinct values that can be encoded by this exemplary symbol would be 8. In other words, there are eight possible combinations of tones that may be played concurrently as this symbol. In this way, each symbol included in measures 322b, 324b, and 328b may, for instance, be viewed as being representative of three bits of data.

    [0071] In another example, consider a symbol for which a symbol map has specified a chord K that is a tetrad (e.g., n=4), the maximum number of tones is equal to n, and the minimum number of tones is equal to zero. This exemplary symbol may, for example, be found in measure 326b. The number of distinct values that can be encoded by this exemplary symbol would be 16. In other words, there are sixteen possible combinations of tones that may be played concurrently as this symbol. In this way, each symbol included in measure 326b may be viewed as being representative of four bits of data. Similarly, the sequence of four symbols included in measure 326b may be viewed as being representative of 16 bits of data. Although zero and n have been described as exemplary minimum and maximum numbers of tones in reference to FIG. 3b, it is to be understood that the techniques described herein may be implemented using any desired value(s) as minimum and maximum numbers of tones.

    [0072] In the example of system 300b, the portions of melody 320b corresponding to measures 322b, 324b, 326b, and 328b may correspond to data values 332b, 334b, 336b, and 338b, respectively. That is, sequences of symbols corresponding to structures S1b-S4b, S5b-S7b, S8b-S11b, and S12b-S15b may be viewed as corresponding to data values 332b, 334b, 336b, and 338b, respectively.

    [0073] FIG. 4b is a table 400b for an exemplary mapping of symbols to distinct values according to the example described in association with FIG. 3b. The table 400b illustrates how an exemplary symbol structure 410b, which may be specified by the symbol map described above in reference to FIG. 3b, can be utilized to represent different data values. A symbol which conforms to symbol structure 410b may, for example, include from zero to three tones (e.g., n=3) selected from the chord of C major (e.g., K = {C4, E4, G4}).

    [0074] In accordance with implementations described above in reference to FIG. 3b, it is understood that symbol structure 410b may provide for up to eight distinct values. These eight distinct values are depicted in FIG. 4b as distinct values 412b-426b. The combinations of tones representing each of the distinct values are indicated in table 400b in both integer and musical notations 422b-424b. The table 400b further includes frequency peak information 426b that indicates the frequency components of the combinations of tones corresponding to each distinct value.

    [0075] It can be understood that various messages can be conveyed in melodies through ordered sequences of symbols which are each representative of distinct values. In the example of FIG. 3b, the ordered sequences of symbols included in measures 322b and 328b may correspond to data values 332b and 338b, respectively. Consider an example in which the symbol map 312b defines each of the symbol structures S1b, S2b, S3b, and S4b (e.g., corresponding to measure 322b) and S12b, S13b, S14b, and S15b (e.g., corresponding to measure 328b) to have symbol structure 410b (e.g., K = {C4, E4, G4} and 0≤T≤3). In this example, the first symbol of measure 322b may, for instance, be viewed as having symbol structure S1b and representing distinct value 426b. That is, the first symbol of measure 322b corresponds to distinct value 426b in that it includes the notes of C4, E4, and G4. It follows that the second, third, and fourth symbols of measure 322b may be viewed as having symbol structures S2b, S3b, and S4b representing distinct values 416b, 422b, and 424b, respectively. Similarly, the first, second, third, and fourth symbols of measure 328b may be viewed as having symbol structures S12b, S13b, S14b, and S15b representing distinct values 422b, 412b, 418b, and 424b, respectively.

    [0076] It is clear that the amount of data that can be conveyed by each symbol may depend, at least in part, upon the values of n and T. The symbol map may, for instance, be determined based at least in part on a desired amount of data to be encoded by each symbol, measure, or melody.

    [0077] FIG. 5 illustrates an example of a sequence in which data is transferred with audible harmonies in a system 500. The system 500 may function in a manner similar to that of systems 100, 300a, and/or 300b which have been described above in association with FIGS. 1, 3a, and 3b. More particularly, the diagram depicts first client device 102 and second client device 104. In operation, the first client device 102 may transmit a data set to the second client device 104 by, for example, playing an audio signal that is encoded with the data set through a speaker that is included in the first client device 102. The second client device 104 may record the audio signal using a microphone that is included in the second client device 104 and decode the data set from the recording.

    [0078] Initially, the first client device 102 may determine the symbol map to be utilized in transmitting the audio signal. As described above, the symbol map selected may be influenced at least in part by user-defined parameters.

    [0079] In some implementations, the symbol map may be selected based at least in part on the time at which the data transfer between the first client device 102 and the second client device 104 is taking place. For example, there may be a predefined schedule of symbol maps to be selected for use at different times. In this example, both the first client device 102 and second client device 104 are configured to utilize one symbol map for communications that occur between 10:05 and 10:06 AM, and utilize another symbol map for communications that occur between 10:06 and 10:07 AM. In this way, the symbol map may be continually changing over time. Such scheduling information may, for instance, be made available by one or more cloud computing devices that are in communication with the first and second client devices 102 and 104. In some implementations, the first and second client devices 102 and 104 may be pre-programmed with the scheduling information described above. One or more other external factors may, in some implementations, influence the particular symbol map that is to be utilized for a data transfer.
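A minimal sketch of such time-based selection, under the assumption of a hypothetical shared round-robin schedule (the map names and one-minute rotation period are placeholders, not part of the specification):

```python
from datetime import datetime

# Hypothetical schedule: the symbol map in force rotates every minute.
# Both devices would need the same schedule (pre-programmed or fetched
# from a cloud service) for decoding to succeed.
SYMBOL_MAPS = ["map_a", "map_b", "map_c"]  # placeholders for real maps

def current_symbol_map(now: datetime) -> str:
    # Minutes since midnight select a map round-robin.
    minute_of_day = now.hour * 60 + now.minute
    return SYMBOL_MAPS[minute_of_day % len(SYMBOL_MAPS)]

# 10:05 AM and 10:06 AM fall in different one-minute windows,
# so they select different symbol maps.
print(current_symbol_map(datetime(2016, 5, 25, 10, 5)))
print(current_symbol_map(datetime(2016, 5, 25, 10, 6)))
```

Clock skew between the devices would have to be small relative to the rotation period for this scheme to stay synchronized.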

    [0080] The second client device 104 may identify the symbol map to be utilized in the data transfer with first client device 102 (510). As described above, the second client device 104 may identify the symbol map by referencing scheduling information that indicates a particular symbol map to be utilized for the time at which 510 occurs. In some implementations, the first client device 102 may indicate which symbol map is to be utilized in a header portion included in the transmitted audio signal. For instance, the first client device 102 may transmit audio encoded with data that specifies the particular symbol map that is to be used to decode melody 520.

    [0081] Upon receiving such data specifying the particular symbol map, the second client device 104 may select the corresponding symbol map in preparation for receiving and decoding an incoming melody 520. In some implementations, this header portion may be included in a portion of the audio signal that immediately precedes melody 520. In some examples, the first client device 102 and second client device 104 may have an otherwise agreed-upon symbol map that both devices intend to use in transferring data. Such an agreed-upon symbol map may be one which, for instance, is hard-coded by both devices.

    [0082] The symbol map utilized in this example may specify that the melody 520 follow an I-IV-V7-I chord progression in the key of C major. Since the first client device 102 adheres to this symbol map in playing melody 520 for the second client device 104, the first client device 102 may, for the duration of measure 530, transmit symbols mapped onto a C chord (532).

    [0083] The second client device 104 may receive and decode the portion of melody 520 that corresponds to measure 530 using a C filter bank (534). That is, the second client device 104 may extract data values encoded in measure 530 by specifically considering occurrences of musical notes C, E, and G, e.g., the musical notes included in a C chord.

    [0084] For measure 540, the first client device 102 may transmit symbols mapped onto an F chord (542). That is, the chord progression included in the symbol map specifies an I to IV chord change corresponding to a transition from measure 530 to measure 540. In other words, the first client device 102 may encode data in measure 540 using an ordered sequence of the musical notes F, A, and C, e.g., the musical notes included in an F chord. The second client device 104 may correspondingly decode received data using an F chord filter bank (544).

    [0085] For measure 550, the first client device 102 may transmit symbols mapped onto a G7 chord (552). That is, the chord progression included in the symbol map specifies a IV to V7 chord change corresponding to a transition from measure 540 to measure 550. In other words, the first client device 102 may encode data in measure 550 using an ordered sequence of the musical notes F, G, B, and D, e.g., the musical notes included in a G7 chord. The second client device 104 may correspondingly decode received data using a G7 chord filter bank (554).

    [0086] For measure 560, the first client device 102 may transmit symbols mapped onto a C chord (562). That is, the chord progression included in the symbol map specifies a V7 to I chord change corresponding to a transition from measure 550 to measure 560. In other words, the first client device 102 may encode data in measure 560 using an ordered sequence of the musical notes C, E, and G, e.g., the musical notes included in a C chord. The second client device 104 may correspondingly decode received data using a C chord filter bank (564).
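The measure-by-measure correspondence between the I-IV-V7-I progression and the note sets used for encoding and filtering may be sketched as follows (illustrative Python; the data structures are hypothetical, and the note spellings follow the text above):

```python
# Each scale degree in the progression determines which notes encode a
# measure on the sending side and which filter bank decodes it on the
# receiving side.
PROGRESSION = ["I", "IV", "V7", "I"]  # measures 530, 540, 550, 560
CHORDS = {
    "I":  ["C", "E", "G"],        # C chord
    "IV": ["F", "A", "C"],        # F chord
    "V7": ["F", "G", "B", "D"],   # G7 chord, as listed in the text
}

for measure, degree in enumerate(PROGRESSION, start=1):
    notes = CHORDS[degree]
    print(f"measure {measure}: {degree} chord, filter bank on {notes}")
```

The receiver can thus reconfigure its filter bank ahead of each measure purely from the shared symbol map, without any side channel.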

    [0087] FIG. 6 is a conceptual diagram of an exemplary framework for decoding data from audible harmonies in a system 600. The system 600 may, for example, be similar to that of the second client device 104 that has been described above in association with FIG. 5. That is, system 600 may be representative of a configuration for a device that receives and decodes audio signals according to a symbol map.

    [0088] In some implementations, system 600 may be representative of a configuration for a digital signal processor that may be utilized by a device in receiving and decoding audio signals according to a symbol map. The system 600 may include a set of filters 602. For example, the second client device 104 may utilize one or more of the filters in set 602 to isolate individual notes included in a melody 610 that is produced by the first client device 102. Each filter included in the set of filters 602 may be a bandpass filter whose center frequency corresponds to a semitone included in at least a portion of the musical scale.

    [0089] FIG. 6, for example, depicts set of filters 602 as including a bandpass filter for each semitone ranging from C4 to D5. The particular bandpass filter utilized to isolate C4, for example, may have a center frequency of 261.6 Hz. Similarly, the particular bandpass filter utilized to isolate D5, for example, may have a center frequency of 587.3 Hz. While set of filters 602 is depicted as including filters for semitones in the 4th and 5th octaves, it is clear that the set of filters utilized may include filters for any desired number of octaves or portions thereof.
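The listed center frequencies follow from standard twelve-tone equal temperament referenced to A4 = 440 Hz. A short sketch (illustrative Python, using MIDI note numbers as a convention not mentioned in the specification):

```python
# Equal-temperament center frequency for a given MIDI note number,
# referenced to A4 = 440 Hz (MIDI note 69).
def note_frequency(midi_note: int) -> float:
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Notes appearing in the filter banks described in this section.
NOTES = {"C4": 60, "E4": 64, "F4": 65, "G4": 67,
         "A4": 69, "B4": 71, "C5": 72, "D5": 74}
for name, midi in NOTES.items():
    print(f"{name}: {note_frequency(midi):.1f} Hz")
# C4 -> 261.6 Hz and D5 -> 587.3 Hz, matching the filters above.
```

The same formula reproduces the remaining center frequencies quoted below (349.2, 392, 440, 493.9, and 523.3 Hz).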

    [0090] The set of filters depicted at 612 may represent the particular subset of bandpass filters that may be utilized in the decoding of an I chord. Since melody 610 is in the key of C major in this example, the set of filters 612 may include bandpass filters with center frequencies of 261.6 Hz, 329.6 Hz, and 392 Hz. That is, the set of filters 612 may be considered to be a C chord filter bank that isolates musical notes C4, E4, and G4. The set of filters 612 may, for instance, depict the C filter bank as utilized by the second client device 104 in decoding data received at measures 530 and 560 of FIG. 5.

    [0091] Referring again to FIG. 5, the set of filters 612 may be representative of a configuration of the second client device 104 employed at 534 and 564 according to the symbol map identified at 510. The set of filters 612 may, for instance, represent a configuration for filtering one or more portions of melody 610 with a structure that has been defined by the symbol map to have a chord K that can be described as:

    K = {C4, E4, G4}

    [0092] The set of filters depicted at 614 may represent the particular subset of bandpass filters that may be utilized in the decoding of a IV chord. Since melody 610 is in the key of C major in this example, the set of filters 614 may include bandpass filters with center frequencies of 349.2 Hz, 440 Hz, and 523.3 Hz. That is, the set of filters 614 may be considered to be an F chord filter bank that isolates musical notes F4, A4, and C5. The set of filters 614 may, for instance, depict the F filter bank as utilized by the second client device 104 in decoding data received at measure 540 of FIG. 5.

    [0093] Referring again to FIG. 5, the set of filters 614 may be representative of a configuration of the second client device 104 employed at 544 according to the symbol map identified at 510. The set of filters 614 may, for instance, represent a configuration for filtering one or more portions of melody 610 with a structure that has been defined by the symbol map to have a chord K that can be described as:

    K = {F4, A4, C5}

    [0094] The set of filters depicted at 616 may represent the particular subset of bandpass filters that may be utilized in the decoding of a V7 chord. Since melody 610 is in the key of C major in this example, the set of filters 616 may include bandpass filters with center frequencies of 349.2 Hz, 392 Hz, 493.9 Hz, and 587.3 Hz. That is, the set of filters 616 may be considered to be a G7 chord filter bank that isolates musical notes F4, G4, B4, and D5. The set of filters 616 may, for instance, depict the G7 filter bank as utilized by the second client device 104 in decoding data received at measure 550 of FIG. 5.

    [0095] Referring again to FIG. 5, the set of filters 616 may be representative of a configuration of the second client device 104 employed at 554 according to the symbol map identified at 510. The set of filters 616 may, for instance, represent a configuration for filtering one or more portions of melody 610 with a structure that has been defined by the symbol map to have a chord K that can be described as:

    K = {F4, G4, B4, D5}

    [0096] It can be noted that, because the chord K corresponds to a seventh chord, or tetrad, the set of filters 616 may include four filters.

    [0097] The set of filters 602 may be dynamically reconfigurable so as to enable the use of filter banks that correspond to any chord desirable. In some implementations, the filters included in set of filters 602 may be digital filters. The filters included in set of filters 602 may include Finite Impulse Response ("FIR") filters, Infinite Impulse Response ("IIR") filters, or a combination thereof.

    [0098] In some implementations, the set of filters 602 may be implemented using one or more Fourier analysis techniques. For instance, the second client device 104 may take one or more Short-time Fourier Transforms ("STFT") of the audio signals that it picks up using a microphone. Each STFT may be evaluated to determine which, if any, of the expected tones are present. Such a determination may be made by analyzing magnitude information, e.g., peak estimation, in each STFT at each of the abovementioned center frequencies that correspond to notes of chords.
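One way to realize such per-note magnitude estimation is the Goertzel algorithm, which evaluates a single DFT bin and so behaves like one narrow bandpass filter per note. The sketch below (illustrative Python; the sample rate, duration, threshold, and synthesized signal are hypothetical choices, not taken from the specification) detects which tones of a C chord are present in a synthesized two-tone symbol:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Power of `samples` at the DFT bin nearest `freq` (Goertzel)."""
    n = len(samples)
    k = round(n * freq / sample_rate)
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Synthesize a 0.5 s symbol containing C4 and G4, then test each note of
# chord K = {C4, E4, G4} for presence against a relative threshold.
RATE, DUR = 8000, 0.5
K = {"C4": 261.6, "E4": 329.6, "G4": 392.0}
n = int(RATE * DUR)
samples = [math.sin(2 * math.pi * K["C4"] * t / RATE) +
           math.sin(2 * math.pi * K["G4"] * t / RATE) for t in range(n)]

powers = {note: goertzel_power(samples, RATE, f) for note, f in K.items()}
threshold = max(powers.values()) / 10
present = [note for note, p in powers.items() if p > threshold]
print(present)  # ['C4', 'G4']
```

The detected tone set would then be mapped back to a distinct value via the symbol map, as described for the filter banks above.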

    [0099] One or more digital signal processing processes may be performed in order to isolate audio signal data corresponding to each appropriate musical note and/or make determinations regarding which notes are played at a given point in melody 610. In some implementations, the filters included in set of filters 602 may be analog filters. Dynamic reconfiguration of such filters may be performed using a network of switches to activate and deactivate the necessary filters. Reconfiguration may also be performed in post-processing of the output(s) of such filter circuitry. In some implementations, the filters utilized in the set of filters 602 may be a combination of analog and digital filtering techniques.

    [0100] FIG. 7 is a flow chart illustrating an example of a process 700 for communicating data using audible harmonies. The process 700 may be performed by a device such as the first client device 102 which has been described above in association with FIGS. 1-6. However, the process 700 may be performed by other systems or system configurations.

    [0101] The first client device 102 determines a set of audio attribute values to be modulated to transfer a data set to the second client device 104 (710). The set of audio attribute values may, for instance, be selected based on a musical relationship between the audio attribute values.

    [0102] In some implementations, these audio attribute values may be pitch values to be played in a melody. In these implementations, the musical relationship between the pitch values may be a chordal relationship between the pitches. Such a chordal relationship between the pitches may be a major chord relationship, a minor chord relationship, a major seventh chord relationship, or a minor seventh chord relationship. For example, an I chord would correspond to a major chord relationship, while a V7 chord would correspond to a major seventh chord relationship. In the examples described above, such a chordal relationship may correspond to the relationship between the pitches that are contained in a chord K.

    [0103] In some implementations, the set of audio attribute values may be duration values and the musical relationship may be a rhythmic relationship between the durations. In the examples described above, such duration values may correspond to duration(s) D and such a rhythmic relationship may correspond to one or more sequences of audio of particular duration(s). The rhythmic relationship might also correspond to a tempo or time signature defined for a melody. In some implementations, the set of audio attribute values are envelope shape parameter values. These may be ADSR values, for example, such as attack time, decay time, sustain level, and release time.

    [0104] The first client device 102 determines a symbol map associating each possible data value for the data set with an ordered sequence of audio attribute values from the set of audio attribute values (720). The symbol map determined by the first client device 102 may effectively define an encoding of data values onto an audio attribute structure. When considering data values on the scale of bytes, the symbol map can be seen as defining a mapping from a radix-256 data value to a mixed-radix representation.
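The mixed-radix view may be sketched as follows (illustrative Python; `to_mixed_radix` is a hypothetical helper, not part of the specification), where each digit's radix is the number of distinct values vi of the corresponding symbol:

```python
# Re-express a value (e.g., a radix-256 byte) as digits under per-symbol
# radices, so each digit selects one tone combination for its symbol.
def to_mixed_radix(value, radices):
    """Convert `value` to mixed-radix digits, most significant first."""
    digits = []
    for r in reversed(radices):
        value, digit = divmod(value, r)
        digits.append(digit)
    return list(reversed(digits))

# Four symbols with v = [4, 4, 4, 4] carry one byte, since 4**4 = 256:
print(to_mixed_radix(201, [4, 4, 4, 4]))  # [3, 0, 2, 1]
# Symbols with differing v (e.g., 3, 4, and 8 from the examples above)
# give a genuinely mixed radix with capacity 3 * 4 * 8 = 96:
print(to_mixed_radix(77, [3, 4, 8]))  # [2, 1, 5]
```

The receiving device would invert this by accumulating decoded digits back into the original value.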

    [0105] The symbol map includes a chord progression that includes a plurality of sets of pitch values, and timing information indicating when each set of pitch values will be used. Each set of pitch values is selected based on a chordal relationship between the pitch values in the set. Such a chordal relationship may be similar to that which has been described above in association with 710. In this way, the symbol map may, at least in part, define a melody encoded with data values that is to be played by the first client device 102.

    [0106] The first client device 102 sends the data set to one or more receiving devices, such as the second client device 104. For this step, the first client device 102 may determine, for each data value in the data set, the ordered sequence of audio attribute values associated with the data value from the symbol map (730) and play an ordered sequence of sounds representing a data value (740), with each sound having an audio attribute value in the determined ordered sequence of audio attribute values. This process may be similar to that which has been described above in reference to the first client device 102 playing an audio signal that includes a melody using one or more electroacoustic transducers, such as a speaker.

    [0107] In some implementations, the symbol map is stored by the one or more receiving devices, such as the second client device 104, before the data set is sent. In these implementations, the one or more receiving devices may store a plurality of different symbol maps, and sending the data set may include sending a header including an identifier of a particular symbol map to be used when transferring the data set.
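One minimal way to realize such an identifier-bearing header is sketched below; the frame marker, field widths, and layout are hypothetical, since the disclosure does not fix a header format:

```python
MAGIC = b"AH"  # hypothetical frame marker ("audible harmonies")

def build_frame(symbol_map_id: int, payload: bytes) -> bytes:
    """Prefix the payload with a header naming the stored symbol map
    the receiver should select, plus the payload length."""
    if not 0 <= symbol_map_id <= 255 or len(payload) > 255:
        raise ValueError("id and length must each fit in one byte")
    return MAGIC + bytes([symbol_map_id, len(payload)]) + payload

frame = build_frame(symbol_map_id=3, payload=b"hi")
assert frame == b"AH\x03\x02hi"
```

The header bytes would themselves be transmitted as sounds under a default symbol map known to both devices.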

    [0108] In some implementations, the process of sending the data set may include sending a header between the sending device, such as the first client device 102, and the receiving devices, such as the second client device 104, that includes the symbol map. The sending device, for example, may send the symbol map to the receiving devices by playing an ordered sequence of sounds representing each data value in the symbol map based on a default symbol map. In this way, new and unique symbol maps can be implemented on-the-fly. This may allow for a great degree of customization.

    [0109] FIG. 8 is a flow chart illustrating an example of a process 800 for communicating data using audible harmonies. The process 800 may be performed by a device such as the second client device 104 which has been described above in association with FIGS. 1-7. However, the process 800 may be performed by other systems or system configurations.

    [0110] The second client device identifies a symbol map associating each data value in a data set with an ordered sequence of audio attribute values (810). The set of audio attribute values may, for instance, be selected based on a musical relationship between the audio attribute values.

    [0111] In some implementations, these audio attribute values may be pitch values to be played in a melody. In these implementations, the musical relationship between the pitch values may be a chordal relationship between the pitches. Such a chordal relationship between the pitches may be a major chord relationship, a minor chord relationship, a major seventh chord relationship, or a minor seventh chord relationship. For example, an I chord would correspond to a major chord relationship, while a V7 chord would correspond to a dominant seventh chord relationship. In the examples described above, such a chordal relationship may correspond to the relationship between the pitches that are contained in a chord K.

    [0112] In some implementations, the set of audio attributes may be duration values and the musical relationship is a rhythmic relationship between the durations. In the examples described above, such duration values may correspond to duration(s) D and such a rhythmic relationship may correspond to one or more sequences of audio of particular duration(s). The rhythmic relationship might also correspond to a tempo or time signature defined for a melody. In some implementations, the set of audio attribute values are envelope shape parameter values. These may be ADSR values, for example.

    [0113] The second client device 104 receives a plurality of sounds from a sending device (820), such as the first client device 102. This process may be similar to that which has been described above in reference to the second client device 104 receiving an audio signal that includes a melody by using one or more acoustic-to-electric transducers, such as a microphone.

    [0114] The second client device 104 identifies ordered sequences of the received sounds that have audio attribute values associated with data values in the symbol map (830). The symbol map utilized by the second client device 104 effectively defines how data values are to be deduced from a received melody. The symbol map includes a chord progression that includes a plurality of sets of pitch values, and timing information indicating when each set of pitch values will be used. Each set of pitch values is selected based on a chordal relationship between the pitch values in the set. Such a chordal relationship may be similar to that which has been described above in association with 810.
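Steps 820 through 840 might be sketched as follows, assuming fixed-length symbols, a shared sample rate, and a single dominant pitch per received sound; the inverse map, frame length, and tolerance are illustrative assumptions:

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz; an assumed sample rate shared by both devices

def dominant_pitch(frame):
    """Estimate the strongest pitch in one windowed frame via the DFT."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

def decode(signal, inverse_map, samples_per_symbol, tolerance_hz=10.0):
    """Slice the signal into symbol-length frames and match each frame's
    dominant pitch against the symbol map's pitch alphabet (830), then
    assemble the matched data values in arrival order (840)."""
    values = []
    for i in range(0, len(signal) - samples_per_symbol + 1,
                   samples_per_symbol):
        f = dominant_pitch(signal[i:i + samples_per_symbol])
        for pitch, value in inverse_map.items():
            if abs(pitch - f) < tolerance_hz:
                values.append(value)
                break
    return values
```

A production receiver would additionally need symbol synchronization and robustness to noise and room acoustics, which this sketch omits.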

    [0115] The second client device 104 assembles the data values according to an order in which the identified sequences were received to form the data set (840). That is, the second client device 104 may chain together the data conveyed by each symbol included in the melody played by the first client device 102.

    [0116] In some implementations, the second client device 104 may receive a header from the sending device, such as the first client device 102, which includes an identifier of a particular symbol map to be used when transferring the data set. In these implementations, the second client device 104 may store a plurality of different symbol maps and use the identifier included in the header sent by the first client device to select the particular symbol map to be used.

    [0117] In some implementations, the process of receiving the plurality of sounds from the sending device, such as the first client device 102, includes receiving a header from the sending device including the symbol map. In these implementations, the process of receiving the symbol map may include receiving an ordered sequence of sounds representing each data value in the symbol map based on a default symbol map. In this way, new and unique symbol maps can be implemented on-the-fly. This may allow for a great degree of customization.

    [0118] Referring again to customization features provided by the techniques described herein, in some implementations, one or more users of the first and second client devices 102 and 104 may create their own "signature" sound that is to be utilized for audible data transfer. For instance, one or more applications running on the first or second client devices 102 and 104 may provide users with tools to create and modify a melody structure, S, on top of which data values are encoded.

    [0119] In these implementations, users may be able to define the characteristics of the melody at the individual-parameter level. In some implementations, a musical interface may be provided to allow a user to create and store structures. Such a musical interface may, for instance, function as a digital audio workstation and may include one or more of a MIDI keyboard, synthesizer software for instrument emulation and timbre generation, and music composition software.

    [0120] In some implementations, the user may be able to select from among a predetermined set of structures. Each melody structure, S, may be a unique configuration of parameters, e.g., chords K, durations D, number of tones T, and envelope/ADSR. In some examples, there may be multiple predetermined presets for different parameters that each user may be able to customize.

    [0121] For example, there may be multiple chord progression presets that a user may be able to choose from. In this example, popular chord progressions such as I-V-vi-IV, I-vi-IV-V, I-V-vi-iii-IV-I-IV-I-IV-V, I-I-I-I-IV-IV-I-I-V-V-I-I, ii-IV-V, I-IV-V-IV, V-IV-I, vi-V-IV-III, and ii-I-V/vii-bVII may be provided for user selection. Each user may also be able to easily select the key in which a melody is to be played in, as well as other parameters, such as tempo and time signature.
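A preset such as I-V-vi-IV might be expanded into sets of pitch values in a chosen key as sketched below; the diatonic-triad construction and MIDI note numbering are illustrative assumptions, not a construction specified by the disclosure:

```python
# Semitone offsets of the major scale from the tonic, and the scale
# degree named by each supported Roman numeral (diatonic triads only).
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]
DEGREES = {"I": 0, "ii": 1, "iii": 2, "IV": 3, "V": 4, "vi": 5, "vii": 6}

def progression_to_pitch_sets(progression, tonic_midi=60):
    """Stack diatonic thirds on each degree to form one triad per chord,
    expressed as MIDI note numbers."""
    sets_ = []
    for numeral in progression:
        d = DEGREES[numeral]
        triad = [tonic_midi + 12 * ((d + i) // 7) + MAJOR_SCALE[(d + i) % 7]
                 for i in (0, 2, 4)]
        sets_.append(triad)
    return sets_

# I-V-vi-IV in C major (tonic MIDI 60 = middle C).
chords = progression_to_pitch_sets(["I", "V", "vi", "IV"])
assert chords[0] == [60, 64, 67]  # C-E-G
```

Transposing the key then amounts to changing `tonic_midi`, while tempo and time signature would be carried separately as rhythmic parameters.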

    [0122] Such structures may be associated with one or more of a graphical and textual description of the structure. For example, some structures may emulate voices. The graphical and textual descriptions associated with such structures may, for example, be representative of a different fictional character to which the particular "voice" belongs. In one example, a user may be able to select one of a "sad robot," a "happy robot," and a "crazy robot," as the voice in which data may be transferred. In this way, users may be given the ability to express themselves through this audible medium. In some examples, data that indicates a selected voice may be included in header information, such as that which has been described above, to encode the choice of customization that is used for the structure of the rest of the signal.

    [0123] In some implementations, the different structures described above may be made available through one or more applications that are running on the sending and/or receiving devices. For example, users may be able to download new structures to be used for making data transfers. In some examples, users may be able to create new structures and share them with others. In these examples, a receiving device, such as the second client device 104, may be able to store newly-received symbol maps so that a user of the second client device 104 may be able to later use a structure used by their friend. Consider the example described above in association with FIG. 1. In this example, the structure for the favorite classic rock song may have been created by the user of the first client device 102. Accordingly, the structure for the favorite classic rock song may be indicated in a symbol map included in a header that is sent from the first client device 102 to the second client device 104.

    [0124] Upon receiving the header, the second client device 104 may use the symbol map to decode the melody which follows the header and may further store the symbol map so that the user of the second client device 104 may be able to send data to others to the tune of the favorite classic rock song. In addition, the first and second client devices 102 and 104 may be able to simply provide an identifier corresponding to the symbol map for the favorite classic rock song after it has been stored at both devices. In some implementations, device users may be able to collaborate in their creation of melody structures.

    [0125] Although major and minor triads and seventh chords have been described in examples above, it is clear that the techniques described herein may utilize any harmonic set of notes as a chord. For example, the techniques described herein may utilize harmonic sets of notes such as chords that may be described as one or more of a major triad, minor triad, augmented triad, diminished triad, diminished seventh, half-diminished seventh, minor seventh, minor major seventh, dominant seventh, major seventh, augmented seventh, augmented major seventh, dominant ninth, dominant eleventh, dominant thirteenth, seventh augmented fifth, seventh flat ninth, seventh sharp ninth, seventh augmented eleventh, seventh flat thirteenth, add nine, add fourth, add sixth, six-nine, mixed-sixth, sus2, sus4, and Jazz sus. In some examples, the audio signals described above may include one or more of a treble and bass section.
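The chord qualities listed above can be represented as tables of intervals above a root, as in this illustrative sketch; only a subset of the listed qualities is shown, and the MIDI-based representation is an assumption:

```python
# Semitone intervals above the root for a few of the chord qualities
# named above; any harmonic set of notes could be added as an entry.
CHORD_INTERVALS = {
    "major triad": (0, 4, 7),
    "minor triad": (0, 3, 7),
    "augmented triad": (0, 4, 8),
    "diminished triad": (0, 3, 6),
    "dominant seventh": (0, 4, 7, 10),
    "major seventh": (0, 4, 7, 11),
    "minor seventh": (0, 3, 7, 10),
    "half-diminished seventh": (0, 3, 6, 10),
    "sus2": (0, 2, 7),
    "sus4": (0, 5, 7),
}

def chord_pitches(root_midi, quality):
    """Return the MIDI note numbers of a chord of the given quality."""
    return [root_midi + i for i in CHORD_INTERVALS[quality]]

assert chord_pitches(60, "dominant seventh") == [60, 64, 67, 70]  # C7
```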

    [0126] In some implementations, the music played in such data transfers may include additional musical components. This might include additional notes that are played for aesthetic purposes but are not representative of encoded data values. Such musical components may be simply filtered out or otherwise ignored by the receiving device when decoding data values.

    [0127] FIG. 9 shows an example of a computing device 900 and a mobile computing device 950 that can be used to implement the techniques described here. The computing device 900 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device 950 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting.

    [0128] The computing device 900 includes a processor 902, a memory 904, a storage device 906, a high-speed interface 908 connecting to the memory 904 and multiple high-speed expansion ports 910, and a low-speed interface 912 connecting to a low-speed expansion port 914 and the storage device 906. Each of the processor 902, the memory 904, the storage device 906, the high-speed interface 908, the high-speed expansion ports 910, and the low-speed interface 912, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.

    [0129] The processor 902 can process instructions for execution within the computing device 900, including instructions stored in the memory 904 or on the storage device 906 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display 916 coupled to the high-speed interface 908. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices may be connected, with each device providing portions of the necessary operations, e.g., as a server bank, a group of blade servers, or a multi-processor system.

    [0130] The memory 904 stores information within the computing device 900. In some implementations, the memory 904 is a volatile memory unit or units. In some implementations, the memory 904 is a non-volatile memory unit or units. The memory 904 may also be another form of computer-readable medium, such as a magnetic or optical disk.

    [0131] The storage device 906 is capable of providing mass storage for the computing device 900. In some implementations, the storage device 906 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices, for example, processor 902, perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums, for example, the memory 904, the storage device 906, or memory on the processor 902.

    [0132] The high-speed interface 908 manages bandwidth-intensive operations for the computing device 900, while the low-speed interface 912 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface 908 is coupled to the memory 904, the display 916, e.g., through a graphics processor or accelerator, and to the high-speed expansion ports 910, which may accept various expansion cards (not shown). In the implementation, the low-speed interface 912 is coupled to the storage device 906 and the low-speed expansion port 914. The low-speed expansion port 914, which may include various communication ports, e.g., USB, Bluetooth, Ethernet, wireless Ethernet, may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.

    [0133] The computing device 900 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 920, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer 922. It may also be implemented as part of a rack server system 924. Alternatively, components from the computing device 900 may be combined with other components in a mobile device (not shown), such as a mobile computing device 950. Each of such devices may contain one or more of the computing device 900 and the mobile computing device 950, and an entire system may be made up of multiple computing devices communicating with each other.

    [0134] The mobile computing device 950 includes a processor 952, a memory 964, an input/output device such as a display 954, a communication interface 966, and a transceiver 968, among other components. The mobile computing device 950 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 952, the memory 964, the display 954, the communication interface 966, and the transceiver 968, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.

    [0135] The processor 952 can execute instructions within the mobile computing device 950, including instructions stored in the memory 964. The processor 952 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 952 may provide, for example, for coordination of the other components of the mobile computing device 950, such as control of user interfaces, applications run by the mobile computing device 950, and wireless communication by the mobile computing device 950.

    [0136] The processor 952 may communicate with a user through a control interface 958 and a display interface 956 coupled to the display 954. The display 954 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 956 may comprise appropriate circuitry for driving the display 954 to present graphical and other information to a user.

    [0137] The control interface 958 may receive commands from a user and convert them for submission to the processor 952. In addition, an external interface 962 may provide communication with the processor 952, so as to enable near area communication of the mobile computing device 950 with other devices. The external interface 962 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.

    [0138] The memory 964 stores information within the mobile computing device 950. The memory 964 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 974 may also be provided and connected to the mobile computing device 950 through an expansion interface 972, which may include, for example, a SIMM (Single In Line Memory Module) card interface.

    [0139] The expansion memory 974 may provide extra storage space for the mobile computing device 950, or may also store applications or other information for the mobile computing device 950. Specifically, the expansion memory 974 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, the expansion memory 974 may be provided as a security module for the mobile computing device 950, and may be programmed with instructions that permit secure use of the mobile computing device 950. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.

    [0140] The memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier such that the instructions, when executed by one or more processing devices, for example, processor 952, perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums, for example, the memory 964, the expansion memory 974, or memory on the processor 952. In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver 968 or the external interface 962.

    [0141] The mobile computing device 950 may communicate wirelessly through the communication interface 966, which may include digital signal processing circuitry where necessary. The communication interface 966 may provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication may occur, for example, through the transceiver 968 using a radio frequency. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 970 may provide additional navigation- and location-related wireless data to the mobile computing device 950, which may be used as appropriate by applications running on the mobile computing device 950.

    [0142] The mobile computing device 950 may also communicate audibly using an audio codec 960, which may receive spoken information from a user and convert it to usable digital information. The audio codec 960 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 950. Such sound may include sound from voice telephone calls, may include recorded sound, e.g., voice messages, music files, etc., and may also include sound generated by applications operating on the mobile computing device 950.

    [0143] The mobile computing device 950 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 980. It may also be implemented as part of a smart-phone 982, personal digital assistant, or other similar mobile device.

    [0144] Embodiments of the subject matter, the functional operations and the processes described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible nonvolatile program carrier for execution by, or to control the operation of, data processing apparatus.

    [0145] Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.

    [0146] The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

    [0147] A computer program, which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.

    [0148] A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

    [0149] The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

    [0150] Computers suitable for the execution of a computer program can be based on, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.

    [0151] Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

    [0152] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.

    [0153] Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.

    [0154] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

    [0155] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

    [0156] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

    [0157] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Other steps may be provided, or steps may be eliminated, from the described processes. Accordingly, other implementations are within the scope of the following claims.


    Claims

    1. A computer-implemented method executed by one or more processors, the method comprising:

    determining a set of pitch values to be modulated to transfer a data set (130) between devices (102, 104), wherein the set of pitch values are selected based on a chordal relationship between the pitch values;

    determining a symbol map (210) associating each possible data value (132, 134, 136, 138) for the data set (130) with an ordered sequence of pitch values from the set of pitch values, the symbol map (210) specifying a chord progression including a plurality of sets of pitch values and timing information indicating when each set of pitch values will be used; and

    sending the data set to one or more receiving devices (104), including for each data value (132, 134, 136, 138) in the data set (130):

    determining the ordered sequence of pitch values associated with the data value (132, 134, 136, 138) from the symbol map (210), wherein the ordered sequence is mapped, based on the timing information, onto the set of pitch values corresponding to the chord progression; and

    playing an ordered sequence of one or more sounds (110) representing the data value (132, 134, 136, 138), each sound having a pitch value in the determined ordered sequence of pitch values.


     
    2. The method of claim 1, wherein the ordered sequence of pitch values to data value mapping varies as a function of time.
     
3. The method of claim 1, wherein each of the one or more sounds in the ordered sequence includes a plurality of pitches played substantially simultaneously.
     
    4. The method of claim 1, wherein the chordal relationship is a major chord relationship, a minor chord relationship, a major seventh chord relationship, or a minor seventh chord relationship.
     
5. The method of any of claims 1 to 4, wherein each set of pitch values is selected based on a chord relationship between the pitch values in the set.
     
    6. The method of any of claims 1 to 5, wherein the symbol map is stored by the one or more receiving devices before the data set is sent,
    and optionally wherein the one or more receiving devices store a plurality of different symbol maps, and sending the data set includes sending a header including an identifier of a particular symbol map to be used when transferring the data set.
     
    7. The method of any of claims 1 to 5, wherein sending the data set includes sending a header between the sending device and the receiving devices including the symbol map.
     
    8. The method of claim 7, wherein sending the header including the symbol map includes playing an ordered sequence of sounds representing each data value in the symbol map based on a default symbol map.
     
    9. A computer-implemented method executed by one or more processors, the method comprising:

    identifying a symbol map (210) associating each possible data value (132, 134, 136, 138) for a data set (130) with an ordered sequence of pitch values from a set of pitch values selected based on a chordal relationship between the pitch values, the symbol map (210) specifying a chord progression including a plurality of sets of pitch values and timing information indicating when each set of pitch values will be used;

    receiving a plurality of sounds (110) from a sending device (102);

    identifying ordered sequences of the received sounds having pitch values associated with data values in the symbol map (210) by considering occurrences of pitch values from the set of pitch values corresponding to the chord progression; and

    assembling the data values (132, 134, 136, 138) according to an order in which the identified sequences were received to form the data set (130).


     
    10. The method of claim 9, wherein the chordal relationship is a major chord relationship, a minor chord relationship, a major seventh chord relationship, or a minor seventh chord relationship.
     
11. The method of claim 9 or 10, wherein each set of pitch values is selected based on a chord relationship between the pitch values in the set.
     
    12. The method of any of claims 9 to 11, wherein identifying the symbol map includes receiving a header from the sending device including an identifier of a particular symbol map to be used when transferring the data set, or
    wherein identifying the symbol map includes receiving a header from the sending device including the symbol map.
     
    13. A system comprising:

    memory (904) for storing data; and

    one or more processors (902) operable to perform operations comprising:

    determining a set of pitch values to be modulated to transfer a data set (130) between devices (102, 104), wherein the set of pitch values is selected based on a chordal relationship between the pitch values;

    determining a symbol map (210) associating each possible data value (132, 134, 136, 138) for the data set (130) with an ordered sequence of pitch values from the set of pitch values, the symbol map (210) specifying a chord progression including a plurality of sets of pitch values and timing information indicating when each set of pitch values will be used; and

    sending the data set (130) to one or more receiving devices (102, 104), including for each data value (132, 134, 136, 138) in the data set (130):

    determining the ordered sequence of pitch values associated with the data value (132, 134, 136, 138) from the symbol map (210), wherein the ordered sequence is mapped, based on the timing information, onto the set of pitch values corresponding to the chord progression; and

    playing an ordered sequence of sounds representing the data value (132, 134, 136, 138), each sound having a pitch value in the determined ordered sequence of pitch values.
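The transmit-side method (claims 1 to 8) and the complementary receive-side method (claims 9 to 12) can be illustrated with a minimal sketch. Everything in it is an illustrative assumption rather than part of the claimed specification: the chord frequencies, the 2-bit symbol alphabet, the timing values, and all function names are invented for demonstration, and tones are modeled abstractly as (start time, frequency) pairs instead of synthesized audio.

```python
# Illustrative sketch only: the chord frequencies, 2-bit alphabet, timing
# values, and function names below are assumptions for demonstration, not
# part of the claimed specification. Tones are modeled abstractly as
# (start_time, frequency) pairs rather than synthesized audio.

# Chord progression: each entry gives the time at which a pitch set becomes
# active (the timing information). Pitches within a set share a chordal
# relationship (major triads with an added octave), in Hz.
CHORD_PROGRESSION = [
    {"start": 0.0, "pitches": [261.63, 329.63, 392.00, 523.25]},  # C major
    {"start": 2.0, "pitches": [392.00, 493.88, 587.33, 783.99]},  # G major
]

# Symbol map: each 2-bit data value -> ordered sequence of chord-degree
# indices into whichever pitch set is active when the sound is played.
SYMBOL_MAP = {
    0b00: (0, 1),
    0b01: (1, 2),
    0b10: (2, 3),
    0b11: (3, 0),
}

SYMBOL_DURATION = 0.25  # seconds per sound


def active_pitch_set(t):
    """Return the pitch set of the chord active at time t."""
    pitches = CHORD_PROGRESSION[0]["pitches"]
    for entry in CHORD_PROGRESSION:
        if t >= entry["start"]:
            pitches = entry["pitches"]
    return pitches


def encode(data_values):
    """Transmit side (claim 1): map each data value to its ordered pitch
    sequence, resolved against the chord active at each instant."""
    tones, t = [], 0.0
    for value in data_values:
        for degree in SYMBOL_MAP[value]:
            tones.append((t, active_pitch_set(t)[degree]))
            t += SYMBOL_DURATION
    return tones


def decode(tones):
    """Receive side (claim 9): classify each received tone against the
    active chord's pitch set, invert the symbol map, and assemble the
    data values in the order the sequences were received."""
    degrees = []
    for t, freq in tones:
        pitches = active_pitch_set(t)
        degrees.append(min(range(len(pitches)),
                           key=lambda i: abs(pitches[i] - freq)))
    inverse = {seq: value for value, seq in SYMBOL_MAP.items()}
    return [inverse[tuple(degrees[i:i + 2])]
            for i in range(0, len(degrees), 2)]
```

A round trip such as `decode(encode([0b10, 0b01]))` recovers the original values because both sides consult the same chord progression; a real system would additionally estimate tone frequencies from microphone input (for example with an FFT) and handle the header-based symbol-map negotiation described in claims 6 to 8.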


     


    Ansprüche

    1. Computerimplementiertes Verfahren, das von einem oder mehreren Prozessoren ausgeführt wird, das Verfahren umfassend:

    Ermitteln eines Satzes von zu modulierenden Tonhöhenwerten zur Übertragung eines Datensatzes (130) zwischen Geräten (102, 104), wobei der Satz von Tonhöhenwerten basierend auf einer Akkordbeziehung zwischen den Tonhöhenwerten ausgewählt wird;

    Ermitteln einer Symbolzuordnung (210), die jeden möglichen Datenwert (132, 134, 136, 138) für den Datensatz (130) mit einer geordneten Folge von Tonhöhenwerten aus dem Satz von Tonhöhenwerten assoziiert, wobei die Symbolzuordnung (210) einen Akkordverlauf angibt, der eine Vielzahl von Sätzen von Tonhöhenwerten und Zeitinformation, die anzeigen, wann jeder Satz von Tonhöhenwerten verwendet wird, beinhaltet; und

    Senden des Datensatzes an ein oder mehrere Empfangsgeräte (104), beinhaltend für jeden Datenwert (132, 134, 136, 138) in dem Datensatz (130):

    Ermitteln der geordneten Folge von Tonhöhenwerten, die mit dem Datenwert (132, 134, 136, 138) aus der Symbolzuordnung (210) assoziiert sind, wobei die geordnete Folge dem Satz von Tonhöhenwerten entsprechend dem Akkordverlauf basierend auf den Zeitinformationen zugeordnet wird; und

    Abspielen einer geordneten Folge eines oder mehrerer Töne (110), die den Datenwert (132, 134, 136, 138) darstellen, wobei jeder Ton einen Tonhöhenwert in der ermittelten geordneten Folge von Tonhöhenwerten aufweist.


     
    2. Verfahren nach Anspruch 1, wobei die geordnete Folge von Tonhöhenwerten zur Datenwertzuordnung in Abhängigkeit von Zeit variiert.
     
    3. Verfahren nach Anspruch 1, wobei jeder des einen oder der mehreren Töne in der geordneten Folge eine Vielzahl von Tonhöhen beinhaltet, die im Wesentlichen gleichzeitig abgespielt werden.
     
    4. Verfahren nach Anspruch 1, wobei die Akkordbeziehung eine Durakkordbeziehung, eine Mollakkordbeziehung, eine Durseptakkordbeziehung oder eine Mollseptakkordbeziehung ist.
     
    5. Verfahren nach einem der Ansprüche 1 bis 4, wobei jeder Satz von Tonhöhenwerten basierend auf einer Akkordbeziehung zwischen den Tonhöhenwerten in dem Satz ausgewählt wird.
     
    6. Verfahren nach einem der Ansprüche 1 bis 5, wobei die Symbolzuordnung von dem einen oder den mehreren Empfangsgeräten gespeichert wird, bevor der Datensatz gesendet wird,
    und wobei, optional, das eine oder die mehreren Empfangsgeräte eine Vielzahl von unterschiedlichen Symbolzuordnungen speichern, und das Senden des Datensatzes ein Senden eines Headers beinhaltet, der eine Kennung einer bestimmten beim Übertragen des Datensatzes zu verwendenden Symbolzuordnung beinhaltet.
     
    7. Verfahren nach einem der Ansprüche 1 bis 5, wobei das Senden des Datensatzes ein Senden eines Headers zwischen dem Sendegerät und den Empfangsgeräten beinhaltet, der die Symbolzuordnung beinhaltet.
     
    8. Verfahren nach Anspruch 7, wobei das Senden des Headers, der die Symbolzuordnung beinhaltet, das Abspielen einer geordneten Folge von Tönen, die jeden Datenwert in der Symbolzuordnung darstellen, basierend auf einer Standardsymbolzuordnung, beinhaltet.
     
    9. Computerimplementiertes Verfahren, das von einem oder mehreren Prozessoren ausgeführt wird, das Verfahren umfassend:

    Identifizieren einer Symbolzuordnung (210), die jeden möglichen Datenwert (132, 134, 136, 138) für einen Datensatz (130) mit einer geordneten Folge von Tonhöhenwerten aus einem Satz von Tonhöhenwerten assoziiert, die basierend auf einer Akkordbeziehung zwischen den Tonhöhenwerten ausgewählt werden, wobei die Symbolzuordnung (210) einen Akkordverlauf angibt, der eine Vielzahl von Sätzen von Tonhöhenwerten und Zeitinformation, die anzeigen, wann jeder Satz von Tonhöhenwerten verwendet wird, beinhaltet;

    Empfangen einer Vielzahl von Tönen (110) von einem Sendegerät (102);

    Identifizieren geordneter Folgen der empfangenen Töne, die Tonhöhenwerte aufweisen, die mit den Datenwerten in der Symbolzuordnung (210) assoziiert sind, durch Berücksichtigen des Auftretens von Tonhöhenwerten aus dem Satz von Tonhöhenwerten, die dem Akkordverlauf entsprechen; und

    Zusammenstellen der Datenwerte (132, 134, 136, 138) gemäß einer Reihenfolge, in der die identifizierten Folgen empfangen wurden, um den Datensatz (130) zu bilden.


     
    10. Verfahren nach Anspruch 9, wobei die Akkordbeziehung eine Durakkordbeziehung, eine Mollakkordbeziehung, eine Durseptakkordbeziehung oder eine Mollseptakkordbeziehung ist.
     
    11. Verfahren nach einem der Ansprüche 9 oder 10, wobei jeder Satz von Tonhöhenwerten basierend auf einer Akkordbeziehung zwischen den Tonhöhenwerten in dem Satz ausgewählt wird.
     
    12. Verfahren nach einem der Ansprüche 9 bis 11, wobei das Identifizieren der Symbolzuordnung das Empfangen eines Headers von dem Sendegerät beinhaltet, der eine Kennung einer bestimmten beim Übertragen des Datensatzes zu verwendenden Symbolzuordnung beinhaltet, oder
    wobei das Identifizieren der Symbolzuordnung das Empfangen eines Headers von dem Sendegerät beinhaltet, der die Symbolzuordnung beinhaltet.
     
    13. System, umfassend:

    Speicher (904) zum Speichern von Daten; und

    einen oder mehrere Prozessoren (902), die in der Lage sind, Operationen durchzuführen, umfassend:

    Ermitteln eines Satzes von zu modulierenden Tonhöhenwerten zur Übertragung eines Datensatzes (130) zwischen Geräten (102, 104), wobei der Satz von Tonhöhenwerten basierend auf einer Akkordbeziehung zwischen den Tonhöhenwerten ausgewählt wird;

    Ermitteln einer Symbolzuordnung (210), die jeden möglichen Datenwert (132, 134, 136, 138) für den Datensatz (130) mit einer geordneten Folge von Tonhöhenwerten aus dem Satz von Tonhöhenwerten assoziiert, wobei die Symbolzuordnung (210) einen Akkordverlauf angibt, der eine Vielzahl von Sätzen von Tonhöhenwerten und Zeitinformation, die anzeigen, wann jeder Satz von Tonhöhenwerten verwendet wird, beinhaltet; und

    Senden des Datensatzes (130) an ein oder mehrere Empfangsgeräte (102, 104), beinhaltend für jeden Datenwert (132, 134, 136, 138) in dem Datensatz (130):

    Ermitteln der geordneten Folge von Tonhöhenwerten, die mit dem Datenwert (132, 134, 136, 138) aus der Symbolzuordnung (210) assoziiert sind, wobei die geordnete Folge dem Satz von Tonhöhenwerten entsprechend dem Akkordverlauf basierend auf den Zeitinformationen zugeordnet wird; und

    Abspielen einer geordneten Folge von Tönen, die den Datenwert (132, 134, 136, 138) darstellen, wobei jeder Ton einen Tonhöhenwert in der ermittelten geordneten Folge von Tonhöhenwerten aufweist.


     


    Revendications

    1. Procédé mis en œuvre par ordinateur exécuté par un ou plusieurs processeurs, le procédé comprenant :

    la détermination d'un ensemble de valeurs de hauteur tonale à moduler pour transférer un ensemble de données (130) entre des dispositifs (102, 104), dans lequel l'ensemble de valeurs de hauteur tonale est sélectionné sur base d'une relation harmonique entre les valeurs de hauteur tonale ;

    la détermination d'une carte de symboles (210) associant chaque valeur de données possible (132, 134, 136, 138) pour l'ensemble de données (130) à une séquence ordonnée de valeurs de hauteur tonale à partir de l'ensemble de valeurs de hauteur tonale, la carte de symboles (210) spécifiant une progression harmonique comprenant une pluralité d'ensembles de valeurs de hauteur tonale et des informations de synchronisation indiquant quand chaque ensemble de valeurs de hauteur tonale sera utilisé; et

    l'envoi de l'ensemble de données vers un ou plusieurs dispositifs de réception (104) comprenant pour chaque valeur de données (132, 134, 136, 138) dans l'ensemble de données (130) :

    la détermination de la séquence ordonnée de valeurs de hauteur tonale associées à la valeur de données (132, 134, 136, 138) à partir de la carte de symboles (210), dans laquelle la séquence ordonnée est cartographiée, sur base des informations de synchronisation, sur l'ensemble de valeurs de hauteur tonale correspondant à la progression harmonique ; et

    la lecture d'une séquence ordonnée d'un ou plusieurs sons (110) représentant la valeur de données (132, 134, 136, 138), chaque son ayant une valeur de hauteur tonale dans la séquence ordonnée déterminée des valeurs de hauteur tonale.


     
    2. Procédé selon la revendication 1, dans lequel la séquence ordonnée des valeurs de hauteur tonale vers la cartographie de valeur de données varie en fonction du temps.
     
    3. Procédé selon la revendication 1, dans lequel chacun du ou des sons dans la séquence ordonnée comprend une pluralité de hauteurs tonales diffusés sensiblement simultanément.
     
    4. Procédé selon la revendication 1, dans lequel la relation harmonique est une relation harmonique majeure, une relation harmonique mineure, une relation harmonique de septième majeure ou une relation harmonique de septième mineure.
     
    5. Procédé selon l'une quelconque des revendications 1 à 4, dans lequel chaque ensemble de valeurs de hauteur tonale est sélectionné sur base d'une relation harmonique entre les valeurs de hauteur tonale dans l'ensemble.
     
    6. Procédé selon l'une quelconque des revendications 1 à 5, dans lequel la carte de symboles est stockée par le ou les dispositifs de réception avant l'envoi de l'ensemble de données.
    et éventuellement dans lequel le ou les dispositifs de réception stockent une pluralité de cartes de symboles différentes, et l'envoi de l'ensemble de données comprend l'envoi d'un en-tête comprenant un identifiant d'une carte de symboles particulière à utiliser lors du transfert de l'ensemble de données.
     
    7. Procédé selon l'une quelconque des revendications 1 à 5, dans lequel l'envoi de l'ensemble de données comprend l'envoi d'un en-tête entre le dispositif d'envoi et les dispositifs de réception comprenant la carte de symboles.
     
    8. Procédé selon la revendication 7, dans lequel l'envoi de l'en-tête comprenant la carte de symboles, comprend la diffusion d'une séquence ordonnée de sons représentant chaque valeur de données dans la carte de symboles sur base d'une carte de symboles par défaut.
     
    9. Procédé mis en œuvre par ordinateur exécuté par un ou plusieurs processeurs, le procédé comprenant :

    l'identification d'une carte de symboles (210) associant chaque valeur de données possible (132, 134, 136, 138) pour un ensemble de données (130) avec une séquence ordonnée de valeurs de hauteur tonale à partir d'un ensemble de valeurs de hauteur tonale sélectionnées sur base d'une relation harmonique entre les valeurs de hauteur tonale, la carte de symboles (210) spécifiant une progression harmonique comprenant une pluralité d'ensembles de valeurs de hauteur tonale et d'informations de synchronisation indiquant quand chaque ensemble de valeurs de hauteur tonale sera utilisé ;

    la réception d'une pluralité de sons (110) à partir d'un dispositif d'envoi (102) ;

    l'identification de séquences ordonnées des sons reçus ayant des valeurs de hauteur tonale associées aux valeurs de données dans la carte de symboles (210) par la prise en compte des occurrences de valeurs de hauteur tonale à partir de l'ensemble de valeurs de hauteur tonale correspondant à la progression harmonique ; et

    l'assemblage des valeurs de données (132, 134, 136, 138) selon un ordre dans lequel les séquences identifiées ont été reçues pour former l'ensemble de données (130).


     
    10. Procédé selon la revendication 9, dans lequel la relation harmonique est une relation harmonique majeure, une relation harmonique mineure, une relation harmonique de septième majeure ou une relation harmonique de septième mineure.
     
    11. Procédé selon l'une quelconque des revendications 9 ou 10, dans lequel chaque ensemble de valeurs de hauteur tonale est sélectionné sur base d'une relation harmonique entre les valeurs de hauteur tonale dans l'ensemble.
     
    12. Procédé selon l'une quelconque des revendications 9 à 11, dans lequel l'identification de la carte de symboles comprend la réception d'un en-tête à partir du dispositif d'envoi comprenant un identifiant d'une carte de symboles particulière à utiliser lors du transfert de l'ensemble de données, ou
    dans lequel l'identification de la carte de symboles comprend la réception d'un en-tête à partir du dispositif d'envoi comprenant la carte de symboles.
     
    13. Système comprenant :

    une mémoire (904) pour le stockage des données ; et

    un ou plusieurs processeurs (902) utilisables pour effectuer des opérations comprenant :

    la détermination d'un ensemble de valeurs de hauteur tonale à moduler pour transférer un ensemble de données (130) entre des dispositifs (102, 104), dans lequel l'ensemble de valeurs de hauteur tonale est sélectionné sur base d'une relation harmonique entre les valeurs de hauteur tonale ;

    la détermination d'une carte de symboles (210) associant chaque valeur de données possible (132, 134, 136, 138) pour l'ensemble de données (130) à une séquence ordonnée de valeurs de hauteur tonale à partir de l'ensemble de valeurs de hauteur tonale, la carte de symboles (210) spécifiant une progression harmonique comprenant une pluralité d'ensembles de valeurs de hauteur tonale et des informations de synchronisation indiquant quand chaque ensemble de valeurs de hauteur tonale sera utilisé; et

    l'envoi de l'ensemble de données (130) à un ou plusieurs dispositifs de réception (102, 104), comprenant pour chaque valeur de données (132, 134, 136, 138) dans l'ensemble de données (130) :

    la détermination de la séquence ordonnée de valeurs de hauteur tonale associées à la valeur de données (132, 134, 136, 138) à partir de la carte de symboles (210), dans laquelle la séquence ordonnée est cartographiée, sur base des informations de synchronisation, sur l'ensemble de valeurs de hauteur tonale correspondant à la progression harmonique ; et

    la lecture d'une séquence ordonnée de sons représentant la valeur de données (132, 134, 136, 138), chaque son ayant une valeur de hauteur tonale dans la séquence ordonnée déterminée de valeurs de hauteur tonale.


     




    Drawing