(19)European Patent Office
(11)EP 3 588 282 A3

(12)EUROPEAN PATENT APPLICATION

(88)Date of publication A3:
29.04.2020 Bulletin 2020/18

(43)Date of publication A2:
01.01.2020 Bulletin 2020/01

(21)Application number: 19177465.2

(22)Date of filing:  29.05.2019
(51)International Patent Classification (IPC): 
G06F 9/30 (2018.01)
(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(30)Priority: 30.06.2018 US 201816024812

(71)Applicant: INTEL Corporation
Santa Clara, CA 95054 (US)

(72)Inventors:
  • NAIR, Krishnakumar
    San Jose, California 95134 (US)
  • YANG, Andrew
    Cupertino, California 95014 (US)
  • ROTZIN, Michael
    Santa Clara, California 95051 (US)
  • GAREGRAT, Nitin
    Chandler, Arizona 85226 (US)
  • SCHEBYE, Tom
    San Carlos, California 94070 (US)
  • WERNER, Tony
    Los Altos, California 94022 (US)

(74)Representative: Samson & Partner Patentanwälte mbB 
Widenmayerstraße 6
80538 München (DE)

  


(54)APPARATUS AND METHOD FOR COHERENT, ACCELERATED CONVERSION BETWEEN DATA REPRESENTATIONS


(57) An apparatus and method for converting tensor data. For example, one embodiment of a method comprises: fetching one or more source tensor blocks of a source tensor data structure, each source tensor block comprising a plurality of source tensor data elements having a first numeric representation, wherein the source tensor data structure comprises a predefined structural arrangement of source tensor blocks; converting the one or more source tensor blocks into one or more destination tensor blocks comprising a plurality of destination tensor data elements having a second numeric representation different from the first numeric representation, wherein the one or more source tensor blocks are converted to one or more corresponding destination tensor blocks in a specified order based on the first and second numeric representations; and storing each destination tensor block in a designated memory region to maintain coherency with the predefined structural arrangement of the source tensor blocks.
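By way of illustration only, the following is a minimal software sketch of the block-wise conversion described in the abstract, assuming float32 source elements and a bfloat16-style destination representation. It is not the patented hardware apparatus; the function names, the 4x4 block size, and the truncation-based conversion are hypothetical choices made solely to show how each destination block can be written to the region that mirrors its source block, preserving the block arrangement.

```python
# Hypothetical sketch of block-wise numeric-representation conversion.
# Not the patented implementation: names, block size, and the
# truncation-based float32 -> bfloat16-like conversion are illustrative.

import numpy as np


def fp32_to_bf16_bits(block: np.ndarray) -> np.ndarray:
    """Keep the upper 16 bits of each float32 value (bfloat16-style encoding)."""
    as_u32 = block.astype(np.float32).view(np.uint32)
    return (as_u32 >> 16).astype(np.uint16)


def convert_tensor_blockwise(src: np.ndarray, block_shape=(4, 4)) -> np.ndarray:
    """Convert a 2-D float32 tensor block by block, preserving the block layout."""
    rows, cols = src.shape
    br, bc = block_shape
    assert rows % br == 0 and cols % bc == 0, "tensor must tile evenly into blocks"

    dst = np.empty((rows, cols), dtype=np.uint16)  # destination representation
    for r in range(0, rows, br):                   # fixed, specified block order
        for c in range(0, cols, bc):
            src_block = src[r:r + br, c:c + bc]
            # Store the converted block in the region corresponding to the
            # source block, keeping the structural arrangement coherent.
            dst[r:r + br, c:c + bc] = fp32_to_bf16_bits(src_block)
    return dst


if __name__ == "__main__":
    tensor = np.random.rand(8, 8).astype(np.float32)
    converted = convert_tensor_blockwise(tensor)
    print(converted.shape, converted.dtype)  # (8, 8) uint16
```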







Search report








