<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ep-patent-document PUBLIC "-//EPO//EP PATENT DOCUMENT 1.1//EN" "ep-patent-document-v1-1.dtd">
<ep-patent-document id="EP93308849B1" file="EP93308849NWB1.xml" lang="en" country="EP" doc-number="0597637" kind="B1" date-publ="20000823" status="n" dtd-version="ep-patent-document-v1-1">
<SDOBI lang="en"><B000><eptags><B001EP>......DE....FRGB........NL........................</B001EP><B005EP>R</B005EP><B007EP>DIM360   - Ver 2.9 (30 Jun 1998)
 2100000/0</B007EP></eptags></B000><B100><B110>0597637</B110><B120><B121>EUROPEAN PATENT SPECIFICATION</B121></B120><B130>B1</B130><B140><date>20000823</date></B140><B190>EP</B190></B100><B200><B210>93308849.4</B210><B220><date>19931105</date></B220><B240><B241><date>19940908</date></B241><B242><date>19961002</date></B242></B240><B250>en</B250><B251EP>en</B251EP><B260>en</B260></B200><B300><B310>975197</B310><B320><date>19921112</date></B320><B330><ctry>US</ctry></B330></B300><B400><B405><date>20000823</date><bnum>200034</bnum></B405><B430><date>19940518</date><bnum>199420</bnum></B430><B450><date>20000823</date><bnum>200034</bnum></B450><B451EP><date>19991015</date></B451EP></B400><B500><B510><B516>7</B516><B511> 7H 01L  21/00   A</B511></B510><B540><B541>de</B541><B542>System und Verfahren für automatische Positionierung eines Substrats in einem Prozessraum</B542><B541>en</B541><B542>System and method for automated positioning of a substrate in a processing chamber</B542><B541>fr</B541><B542>Système et méthode de positionnement automatique d'un substrat dans une chambre de traitement</B542></B540><B560><B561><text>EP-A- 0 313 466</text></B561><B561><text>EP-A- 0 508 748</text></B561><B561><text>GB-A- 2 180 097</text></B561></B560><B590><B598>1   2</B598></B590></B500><B700><B720><B721><snm>Shmookler, Simon</snm><adr><str>415 Colon Avenue</str><city>San Francisco,
California 94127</city><ctry>US</ctry></adr></B721><B721><snm>Weinberg, Andrew G.</snm><adr><str>1954 Foxworthy Avenue</str><city>San Jose,
California 95124</city><ctry>US</ctry></adr></B721><B721><snm>McGrath, Martin J.</snm><adr><str>1440 Ramon Drive</str><city>Sunnyvale,
California 94087</city><ctry>US</ctry></adr></B721></B720><B730><B731><snm>APPLIED MATERIALS, INC.</snm><iid>00511373</iid><irf>41106000/EA4985</irf><adr><str>3050 Bowers Avenue,
M/S 2061</str><city>Santa Clara,
California 95054-3299</city><ctry>US</ctry></adr></B731></B730><B740><B741><snm>Bayliss, Geoffrey Cyril</snm><sfx>et al</sfx><iid>00028151</iid><adr><str>BOULT WADE TENNANT,
Verulam Gardens
70 Gray's Inn Road</str><city>London WC1X 8BT</city><ctry>GB</ctry></adr></B741></B740></B700><B800><B840><ctry>DE</ctry><ctry>FR</ctry><ctry>GB</ctry><ctry>NL</ctry></B840></B800></SDOBI><!-- EPO <DP n="1"> -->
<description id="desc" lang="en">
<p id="p0001" num="0001">This invention relates generally to an improved position control means for robotic handling systems and more particularly, to an improved system and method for transferring a substrate to a predetermined position in a processing chamber.</p>
<p id="p0002" num="0002">In the manufacture of integrated circuits, semiconductor substrates are loaded into various reaction and other chambers using automated equipment for processing. Equipment has been designed including a robot that can transfer a semiconductor substrate, such as a silicon wafer, from a cassette through a central transfer chamber and into one or more processing chambers arranged around and connected to the transfer chamber in which the robot is located. It is desirable to know the exact location of the semiconductor substrate relative to the processing chamber so that the substrate can be precisely positioned at an optimum location within the apparatus and the processing applied precisely to the desired surface area of the substrate. Likewise, it is also desirable that the substrate positioning apparatus that is used<!-- EPO <DP n="2"> --> as a reference point and upon which the substrate is transported be routinely calibrated so that positioning error is minimized, if not eliminated.</p>
<p id="p0003" num="0003">Currently, there are a few known methods and systems for locating the centerpoint of semiconductor substrates, including those disclosed in US-A-4,833,790, entitled METHOD AND SYSTEM FOR LOCATING AND POSITIONING CIRCULAR WORKPIECES, and EP-A-0288233, entitled SYSTEM AND METHOD FOR DETECTING THE CENTER OF AN INTEGRATED CIRCUIT WAFER.</p>
<p id="p0004" num="0004">In US-A-4,833,790, the method and system disclosed is of a "spindle" type whereby a wafer is transferred by shuttle to a spindle where it is incrementally rotated, the distance from the center of rotation to the periphery of the wafer is measured along a linear path by a sensor means, the wafer centerpoint offset is calculated by geometric analysis of the measurements, and the wafer is centered on the spindle by the shuttle.</p>
<p id="p0005" num="0005">There are several disadvantages with the "spindle" type method and system. First, it is an entirely separate and distinct apparatus from the processing system. Having a separate centerfinding apparatus requires an additional step in the manufacturing process, adding cost and complexity and<!-- EPO <DP n="3"> --> reducing valuable throughput. That is, the wafer cannot be directly unloaded by robot from the wafer storage cassette and transferred to a processing chamber without first being manipulated by the separate centerfinding apparatus. As a result, the "spindle" type system and method does not take advantage of the direct movement of the wafer as it is transferred from the wafer storage cassette to the processing chamber. In addition, the "spindle" type system shuttle may require periodic calibration by a separate calibration tool if the centerfinding method is to remain accurate. Furthermore, once the positioning method has been performed, the wafer is transferred to a separate wafer transport arm which may also require periodic calibration to maintain precision positioning of the wafer.</p>
<p id="p0006" num="0006">In EP-A-0288233, the system and method disclosed is of an "optical sensor array" type whereby a semiconductor wafer is moved along a linear path across an array of sensors positioned generally transverse to the linear path of the wafer support blade. This "centerfinder" method is performed upon the direct removal of the wafer from a storage cassette by a processing system robot and while en route to a processing chamber. The robot blade and peripheral edges of the wafer are detected separately by the optical sensors to calculate the coordinate center position of the wafer relative to the robot blade. An xy coordinate system is defined by the path (x) of movement of the<!-- EPO <DP n="4"> --> robot arm/blade and the center line (y) of the optical sensors. The origin (0) of the y coordinate axis is defined by the position of the center sensor. The detection of the robot blade provides a reference point and origin (0,0) of the xy coordinate system from which to move the wafer to its destination point. The detection of points along the leading and trailing edges of the wafer provides points on the x axis generally parallel to the path of movement of the wafer and from which the centerpoint of the wafer can be geometrically determined. Once the wafer center position is geometrically determined, the wafer can be moved and positioned at the destination location.</p>
<p id="p0007" num="0007">The "centerfinding" system of EP-A-0288233 overcomes the disadvantages of the system of US-A-4,833,790 in that a separate and distinct apparatus is not required to determine the centerpoint of the wafer. The centerpoint of the wafer is determined directly during movement of the wafer to its destination location. This is especially advantageous in a wafer processing system configuration where there exists a robot of a R-Theta type in a multiple chamber processing apparatus with a single loadlock chamber as shown in EP-A-0288233.</p>
<p id="p0008" num="0008">However, there are disadvantages to the "centerfinding" system. The first and foremost disadvantage is that the wafer must pass over the sensors in a linear path transverse to the position of the sensors which are positioned adjacent to the loadlock chamber. This means that the "centerfinding" operation<!-- EPO <DP n="5"> --> can only take place when a wafer is being loaded or unloaded from the loadlock chamber adjacent to the position of the sensors. This is a distinct disadvantage when the processing system has multiple loadlock chambers as well as multiple processing chambers. Each time a wafer is transported from one chamber to another, it must first be transferred back to the loadlock chamber adjacent the sensors so that the wafer can be passed through the sensors in the linear fashion shown in EP-A-0288233 for the centerfinding method to be performed. As a result, the "centerfinder" system is ill-suited to the multiple chamber wafer processing system configuration in that it causes a decrease in valuable throughput. If multiple sensor arrays are used, for example, if a sensor array is positioned adjacent to each loadlock and processing chamber, the increase in complexity and cost would render the configuration impractical. Furthermore, the algorithms used to geometrically determine the centerpoint of the wafer cannot be easily extended to include more than three sensors.</p>
<p id="p0009" num="0009">The second disadvantage of the "centerfinder" system is that once the wafer centerpoint is determined, the positioning of the wafer is relative to a reference point taken from the robot blade. This is a disadvantage in that the reference point is previously calibrated relative to the position of a "golden" wafer, hand centered on the blade. This manual intervention for calibrating the robot blade increases<!-- EPO <DP n="6"> --> the chance of calibration error. In addition, the golden wafer used may have an undetected defect, or not be perfectly round, thereby increasing calibration error. Lastly, manual intervention using a separate and distinct calibration tool such as a golden wafer is cumbersome and inefficient.</p>
<p id="p0010" num="0010">A third disadvantage of the "centerfinder" is that the sensor electronics are disposed within the system transfer chamber. The sensor electronics must be placed within the transfer chamber to reduce sensor error in the detection of the wafer. Being thus disposed, the components of the electronics can outgas, thereby contaminating the wafers. This is a further reason why a plurality of sensor arrays are impractical with this type of centerfinder system. The greater the number of sensor arrays, the greater the likelihood of contamination.</p>
<p id="p0011" num="0011">An additional disadvantage of the "centerfinder" is that the robot arm must be in an extended position for the wafer to be detected by the sensors. The extended stroke of the robot increases any calibration error and reduces the accuracy of the centerfinder operation.</p>
<p id="p0012" num="0012">Thus, a need has arisen for an improved system and method for determining the location of a semiconductor substrate with respect to a semiconductor substrate support which increases throughput in a multiple loadlock and processing chamber system, includes sensor electronics disposed outside the<!-- EPO <DP n="7"> --> transfer chamber for reduction of contaminants, is easily extendible to accommodate a plurality of sensor arrays, and wherein the location operation occurs when the robot arm is in a retracted position to minimize calibration error, if any. In addition, there is a need for an improved positioning system which can be calibrated without manual intervention or need for a separate and distinct calibration tool.</p>
<p id="p0013" num="0013">This invention provides a system for accurately positioning an object at a preselected location comprising object transfer means including a support moveable in a predetermined arcuate path about a fixed axis and along linear paths extending radially of the fixed axis around said arcuate path for carrying said object between a first location and a preselected second location, the position of said moveable support being known at all times, and the position of said object relative to said moveable support initially being unknown, an array of optical sensors including at least two sensors disposed between said locations, said sensors being operative to detect a plurality of points on the perimeter of the object as it is carried by the moveable support between said locations to generate signals from which the position of said object relative to the known position of said moveable support can be determined, and means responsive to the signals and operative to position the object at said preselected location; characterised in that at least two sensors of said array of sensors are disposed along an axis extending generally transverse to said arcuate path of the moveable support to detect said perimeter points of the object as it moves through said arcuate path.<!-- EPO <DP n="8"> --></p>
<p id="p0014" num="0014">The invention also provides an object processing apparatus having an object positioning system as set out above wherein the apparatus has a central transfer chamber, a plurality of peripheral chambers positioned around the periphery of said central transfer chamber, said moveable object support being moveable within said central transfer chamber along said arcuate path between said peripheral chambers to load, move and unload said object to and from said peripheral chambers, the object positioning system further comprising means for providing object support reference signals indicative of the position of an object support reference point and said sensors being triggered by the leading and trailing edges of the moving object as it passes therethrough to develop corresponding object position signals from which an object position reference point can be determined, and means being provided responsive to the object support reference signals and the object position signals operative to calculate the location of the object position relative to the object support, and further operative to move said object support to a corresponding offset position relative to said preselected location so as to position said object at said preselected location in one of the peripheral chambers.</p>
<p id="p0015" num="0015">The invention further provides a method for accurately positioning an object having a centerpoint from a first location to a known preselected second location by detecting the relative position of the object with respect to a moveable object support upon which said object is supported and which is moveable in a predetermined arcuate path about a fixed axis and along linear paths<!-- EPO <DP n="9"> --> extending radially of the axis around the arcuate path between said first and second locations, the position of said moveable object support at all times being known, and the position of said object relative to said moveable object support initially being unknown; the method comprising the steps of providing an array of sensors including at least two sensors mounted along an axis generally transverse to the path, detecting perimeter points along the perimeter of the object by moving the moveable object support along the path, thereby triggering the sensors to generate object signals from which the position of said object can be determined relative to the known position of said moveable object support, calculating the object position relative to the known position of the moveable object support from the object signals, and moving said moveable object support and said object supported thereon to the second preselected location so that said object is coincident with said selected location; wherein the object support is rotated about said fixed axis to move the object through said arcuate path during movement from said first to said second location, and in that said sensors are mounted along an axis generally transverse to the arcuate path to detect said perimeter points of the object as it moves through the arcuate path.</p>
<p id="p0016" num="0016">The following is a description of some specific embodiments of the invention, reference being made to the accompanying drawings, in which:
<ul id="ul0001" list-style="none" compact="compact">
<li>Fig. 1 is a schematic diagram of the system assembly of the invention;</li>
<li>Fig. 2 is a cross-sectional view of a<!-- EPO <DP n="10"> --> semiconductor processing transfer chamber taken along the line 2-2 of Fig. 1;</li>
<li>Figs. 3a-3d are plan views of the transfer chamber shown in Fig. 1 illustrating the sequential movement of a robot and substrate from a first chamber to a second chamber and intersection of the sensor array to implement the substrate centerfinder aspect of the invention;</li>
<li>Fig. 4 is a diagrammatical depiction of the geometric relationship of the six sensor trigger points and the robot center of rotation reference point used in the determination of the coordinate locations of the substrate trigger points;</li>
<li>Fig. 5 is a diagrammatical depiction of the geometric relationship between three trigger points and the center of the substrate used in the determination of the centerpoint of the substrate;</li>
<li>Fig. 6a is a diagram representing wave forms of the change in states of the photoelectric sensors and corresponding individual and combined interrupt signals;</li>
<li>Fig. 6b is a simplified circuit diagram for the ordered processing of sensor signals;<!-- EPO <DP n="11"> --></li>
<li>Fig. 7 is a flow chart of the computer program centerfinder routine used to perform the centerfinding method of this invention;</li>
<li>Fig. 8 is a partial plan view of the system robot and support blade in retracted position over the sensor array showing the relationship of the sensor beams to the support blade;</li>
<li>Fig. 9 is a sequential plan view of the system robot including a transfer blade showing the movement of the blade from a retracted position to an extended position (split view) over a selected sensor required for the calibration of the robot blade of this invention;</li>
<li>Fig. 10 is a diagrammatic representation of the robot extension equation illustrating the non-linear curve equating extension motor input to extension distance; and</li>
<li>Fig. 11 is a computer program flow chart depicting the calibration routine of this invention.</li>
</ul></p>
<heading id="h0001"><b>1. Overview of The System Assembly</b></heading>
<p id="p0017" num="0017">Although the following description describes the invention in terms of a semiconductor wafer, this is for illustration purposes only and other substrates or objects to be transferred to a preselected location can be substituted therefor, as will be known to one skilled in the art.<!-- EPO <DP n="12"> --></p>
<p id="p0018" num="0018">Fig. 1 of the accompanying drawing is a diagrammatic representation of a multi-chamber, semiconductor wafer processing system including a wafer robot centerfinding and calibration system and method in accordance with the present invention. The system shown generally at 1 includes a transfer chamber 2 having at least two integrally attached wafer receiving chambers 3a (e.g., a loadlock chamber) and 3b (e.g., a wafer processing chamber) having centrally located, optimal position points 65a and 65b respectively. During processing, the wafer should be centered over these positions 65a and 65b. A wafer transfer means, such as the R-theta robot 4 having a wafer support blade 4a, is shown with a semiconductor wafer 5 in transfer position thereon. A photoelectric sensor array 6 preferably includes four sensors connected to a sensor electronics assembly 8 (shown separated by way of example from sensor array 6 for the convenience of component illustration) having a power supply 9; the assembly 8 is connected to a sensor interface card 11 receivingly engaged within a processing means 12. The processing means also includes a robot motor control interface card 13 connected to a robot motor interface (terminal) board 15 having connected to it a motor driver 16a connected to a first robot motor 17a (Fig. 2), and a second motor driver 16b connected to a second robot motor 17b (Fig. 2). The terminal board 15 is also connected to robot motor encoders 18a and 18b. Processing means 12 also includes<!-- EPO <DP n="13"> --> a digital input/output card 19 which is connected to a robot "home" sensor 19a (not shown).</p>
<p id="p0019" num="0019">Sensor array 6 is preferably positioned between a pair of chambers 3a and 3b such that the centerfinding method of this invention, more fully described hereinbelow, can be performed during normal wafer transfer operation. A four sensor array is preferred for increased redundancy and accuracy. Using a four sensor array is particularly advantageous when processing silicon wafers with multiple flats as the substrate. However, the centerfinding method can be performed with as few as two sensors.</p>
<p id="p0020" num="0020">While only a single array is shown for purposes of illustration of the invention, an alternative configuration can include multiple arrays respectively positioned between other chamber pairs. The mathematical algorithms used in the determination of the centerpoint of a wafer as disclosed herein are easily extendable by one skilled in the art to accommodate as many sensors and arrays as are desired.</p>
<p id="p0021" num="0021">By way of example, the system robot 4 may consist of an Applied Materials, Inc., 5500 or 5200 R-Theta robot; the photoelectric sensors of array 6 may consist of Banner Engineering, Inc., model nos. SM31RL and SM31-EL (pair); the sensor interface card 11 may consist of an IBM®/PC compatible sensor interface card; the processing means 12 may consist of an IBM® compatible personal computer; the motor control interface<!-- EPO <DP n="14"> --> card 13 may consist of a Metrabyte counter/encoder board model no. 5312; and the digital input/output card 19 may consist of an Oregon Micro Systems stepper controller board model no. PCX-4E. A system assembly control computer program written in C language is used to control the centerfinding and calibration operations of the invention, the logic of which is shown in Figs. 7 and 11 and described in greater detail hereinbelow.</p>
<p id="p0022" num="0022">Fig. 2 is a partial cross-sectional view of the transfer chamber 2 of Fig. 1 taken along the line 2-2. As shown, the chamber 2 includes the R-Theta robot 4 having dual stepper motors 17a (upper) and 17b (lower) with integral motor encoders 18a and 18b, a wafer support blade 4a and robot arms 4b. Also shown in Fig. 2 is a photoelectric sensor array 6 comprising four receivers 6a, 6b, 6c and 6d and four emitters 20a, 20b, 20c and 20d, and associated electronics shown generally at 8a and 8b. As was illustrated in Fig. 1, the array of sensor emitter-receiver pairs is positioned to extend along a line transverse to the arcuate path 66 followed by the wafer 5 as it is moved between chambers. Emitters 20a-d and receivers 6a-d are disposed outside of the transfer chamber 2 to eliminate contamination as a result of any outgassing associated with the sensor electronics. The sensing light beams 30-33 pass into and out of the transfer chamber 2 through quartz windows 7 disposed in the upper and lower walls 2a.<!-- EPO <DP n="15"> --></p>
<heading id="h0002"><b>2. Overview of The Centerfinding Method</b></heading>
<p id="p0023" num="0023">Figs. 3a-3d illustrate the sequential movement of the system robot 4 and the wafer 5 from a first chamber 3a to a second chamber 3b while intersecting the sensor array 6 for the performance of the wafer centerfinding aspect of the invention.</p>
<p id="p0024" num="0024">Referring now to Fig. 3a, the robot 4 is shown in its fully extended position for retrieval of a wafer 5 from a chamber 3a. Once the wafer 5 is loaded onto the support blade 4a, the robot 4 retracts the support blade 4a and carries the wafer 5 from the first chamber 3a moving radially in the direction of arrow 21 to its fully retracted position as shown in Fig. 3b. Once retracted, the robot 4 rotates about its axis 4d in the direction of the arrow 22 causing the support blade 4a and the wafer 5 to sweep in an arcuate path across the sensor array 6 while en route to the second chamber 3b. Fig. 3c shows the position of the robot 4, the support blade 4a and the wafer 5 subsequent to the movement across the sensor array 6, and prior to radial extension into the second chamber 3b.</p>
<p id="p0025" num="0025">The movement of the wafer 5 across the sensor array 6 permits the detection of coordinate points along the leading and trailing edges of the wafer 5 so that the centerpoint of the wafer 5 can be geometrically determined in relation to the known position of the center 4c of the support blade 4a. The method<!-- EPO <DP n="16"> --> of centerpoint determination is more fully described hereinbelow with reference to Figs. 5 and 6.</p>
<p id="p0026" num="0026">Once the centerpoint of the wafer 5 is determined with relation to the center reference point 4c of the support blade 4a (i.e., point 4c is a point on support blade 4a concentric with the center point of a wafer 5 properly centered thereon), any position error is known. Thus the support blade 4a can be rotated as necessary and extended in the direction of the arrow 23 for precise placement of the center of the wafer 5 in the second chamber 3b so that the centerpoint of the wafer 5 is coincident with the selected destination location 65b. Precision placement of the wafer 5 in the second chamber 3b (Fig. 3d) is accomplished by moving the center of the support blade 4a into a position so that its center point 4c is offset from location 65b by an equal and opposite distance to any position error found for the wafer 5 on the support blade 4a. The wafer center will thereby be precisely placed at the preselected location 65b in the second chamber 3b. By way of example, if the center of the wafer is determined to be located at coordinate B+X,B+Y in relation to the centerpoint B of the support blade, the center B of the support blade 4a will be positioned in the second chamber 3b at coordinates C-X,C-Y in relation to the selected chamber point C, so that the center of the wafer 5 will be aligned with the preselected location 65b.<!-- EPO <DP n="17"> --></p>
<heading id="h0003"><b>a. Determination of The Wafer Centerpoint</b></heading>
<p id="p0027" num="0027">In Figs. 4 and 5 there are shown six trigger points P11, P12, P21, P22, P31 and P32 as determined by the corresponding output signals of an array of three pairs of sensor emitter-receivers. (While a four-pair array is preferred as previously disclosed above, an exemplary array of three emitter-receiver pairs is discussed here for ease of description and illustration.) The relative coordinates of the trigger points are determined during rotation of the robot 4 using a single interrupt to the main system controller microprocessor 12.</p>
<p id="p0028" num="0028">Referring now to Figs. 2 and 6, the relative coordinates of the trigger points are first determined as a change in state of the sensors 6a-d and 20a-d. As the wafer 5 passes through each beam 30, 31, 32 and 33, a change of state occurs for each pair of emitters-receivers 6a-20a, 6b-20b, 6c-20c and 6d-20d, first at the leading edge of the wafer 5 and then at the trailing edge.</p>
<p id="p0029" num="0029">Fig. 6a is a graphical representation of the occurrence of the change of states of a three sensor array. As shown, the first three rows depict the changes of state of the sensor signals S1, S2 and S3, first at the leading edge of a wafer and second at the trailing edge. The signal forms S1, S2 and S3 depict, by way of example, the output signals corresponding to<!-- EPO <DP n="18"> --> receivers 6a, 6c, and 6d of Fig. 2. The second three rows depict the corresponding interrupt pulses I1, I2 and I3 generated at the leading and trailing edges of the signals S1-S3. The last row depicts a single pulse chain or interrupt line I1-3 showing the interrupt pulses for all six trigger points on the wafer 5.</p>
<p id="p0030" num="0030">The change of state of each sensor is detected by a circuit 30 (Fig. 6b) of the digital sensor interface card 11 (Fig. 1). At each change of state, the circuit 30 generates a pulse (I1, I2 or I3) which is directed through an "or" gate 31 which in turn sets an interrupt flip-flop 32 on the processing means (microprocessor) 12 (Fig. 1). When the flip-flop 32 receives the first pulse of pulse chain I1-3, Q is set to high (0 to 1) and a standard microprocessor interrupt service routine is executed to get the associated trigger point data and motor 17a-b step counts (present theta and R locations of stepper motors at detection). Once the data is retrieved and recorded, the microprocessor 12 resets the flip-flop (1 to 0) readying it to receive another pulse in chain I1-3. An input table is created and stored in memory corresponding to the digital representation of the changes of state of sensor signals S1, S2 and S3, and the theta angle measurement of each in terms of steps of the robot stepper motors 17a and 17b (Fig. 2). By way of example, the interrupt input table might look substantially as follows:<!-- EPO <DP n="19"> --> 
<tables id="tabl0001" num="0001">
<table frame="all">
<title>TABLE I</title>
<tgroup cols="5" colsep="1" rowsep="0">
<colspec colnum="1" colname="col1" colwidth="31.50mm"/>
<colspec colnum="2" colname="col2" colwidth="31.50mm"/>
<colspec colnum="3" colname="col3" colwidth="31.50mm"/>
<colspec colnum="4" colname="col4" colwidth="31.50mm"/>
<colspec colnum="5" colname="col5" colwidth="31.50mm"/>
<thead valign="top">
<row rowsep="1">
<entry namest="col1" nameend="col1" align="center">Table Index #</entry>
<entry namest="col2" nameend="col2" align="center">S1</entry>
<entry namest="col3" nameend="col3" align="center">S2</entry>
<entry namest="col4" nameend="col4" align="center">S3</entry>
<entry namest="col5" nameend="col5" align="center">Theta (in steps)</entry></row></thead>
<tbody valign="top">
<row>
<entry namest="col1" nameend="col1" align="right">0</entry>
<entry namest="col2" nameend="col2" align="right">0</entry>
<entry namest="col3" nameend="col3" align="right">0</entry>
<entry namest="col4" nameend="col4" align="right">0</entry>
<entry namest="col5" nameend="col5" align="right">------</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">1</entry>
<entry namest="col2" nameend="col2" align="right">1</entry>
<entry namest="col3" nameend="col3" align="right">0</entry>
<entry namest="col4" nameend="col4" align="right">0</entry>
<entry namest="col5" nameend="col5" align="right">15,000</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">2</entry>
<entry namest="col2" nameend="col2" align="right">1</entry>
<entry namest="col3" nameend="col3" align="right">1</entry>
<entry namest="col4" nameend="col4" align="right">0</entry>
<entry namest="col5" nameend="col5" align="right">17,500</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">3</entry>
<entry namest="col2" nameend="col2" align="right">1</entry>
<entry namest="col3" nameend="col3" align="right">1</entry>
<entry namest="col4" nameend="col4" align="right">1</entry>
<entry namest="col5" nameend="col5" align="right">20,400</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">4</entry>
<entry namest="col2" nameend="col2" align="right">0</entry>
<entry namest="col3" nameend="col3" align="right">1</entry>
<entry namest="col4" nameend="col4" align="right">1</entry>
<entry namest="col5" nameend="col5" align="right">25,200</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">5</entry>
<entry namest="col2" nameend="col2" align="right">0</entry>
<entry namest="col3" nameend="col3" align="right">1</entry>
<entry namest="col4" nameend="col4" align="right">0</entry>
<entry namest="col5" nameend="col5" align="right">27,510</entry></row>
<row rowsep="1">
<entry namest="col1" nameend="col1" align="right">6</entry>
<entry namest="col2" nameend="col2" align="right">0</entry>
<entry namest="col3" nameend="col3" align="right">0</entry>
<entry namest="col4" nameend="col4" align="right">0</entry>
<entry namest="col5" nameend="col5" align="right">29,330</entry></row></tbody></tgroup>
</table>
</tables></p>
<p id="p0031" num="0031">The interrupt processing routine need not indicate which sensor caused the interrupt, and only needs to save the signal data from the sensors. From the input table data, the polar coordinates of the six trigger points P11, P12, P21, P22, P31 and P32 can be determined relative to the known center 4d of rotation of the robot 4.</p>
<p id="p0032" num="0032">Referring now to Fig. 4, the center of the support blade 4a is at polar coordinate (r<sub>o</sub>,θ<sub>c</sub>), where r<sub>o</sub> is the calibrated radial distance from the center of robot rotation point 4d and θ<sub>c</sub> is the angular position of the support blade 4a about the robot center of rotation 4d. The radii r1, r2 and r3 of each of the trigger points P11, P12, P21, P22, P31 and P32 are known as they are the same radii as the sensor locations which are determined as a function of the robot calibration method described herein below. By determining the difference in angle Δθ in degrees as a function of motor steps from a known home position of the robot 4 to a trigger point position (Table I), the polar coordinates (r,θ) can be determined.<!-- EPO <DP n="20"> --></p>
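The step-count-to-angle conversion just described can be sketched as follows. This Python fragment is illustrative only and not part of the specification; it assumes the 200,000 steps per full robot rotation stated later in the description, and a sensor radius already known from the calibration method described below:

```python
STEPS_PER_REV = 200_000  # steps per full 360° rotation of the robot (stated later in the text)

def step_count_to_theta_deg(steps_from_home: int) -> float:
    """Convert a motor step count (relative to the robot home position)
    into an angle in degrees about the robot center of rotation 4d."""
    return steps_from_home * 360.0 / STEPS_PER_REV

def trigger_point_polar(steps_from_home: int, sensor_radius: float):
    """A trigger point lies at its sensor's known radius, at the angle
    swept from home when that sensor fired; return (r, theta_deg)."""
    return sensor_radius, step_count_to_theta_deg(steps_from_home)
```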
<p id="p0033" num="0033">Once the polar coordinates of the six trigger points P11, P12, P21, P22, P31 and P32 are determined, they are converted to cartesian coordinates (X,Y) relative to the known center of rotation of the robot, which is the selected origin point (0,0) of the coordinate system. By knowing at least three points on the circumference of the wafer 5, as depicted in Fig. 5, the centerpoint can be geometrically calculated. A chord can be drawn between any two points on a circle, and a line which passes through the midpoint of that chord at a right angle passes through the center of the circle. Therefore, the centerpoint is the coordinate point of intersection of the perpendicular bisectors of two such chords drawn through trigger points on the circumference of the wafer 5.</p>
<p id="p0034" num="0034">By way of illustration, Fig. 5 shows trigger points P11, P21, and P31 having corresponding cartesian coordinates (X1,Y1), (X2,Y2) and (X3,Y3) on the circumference of the wafer 5. If lines 25 and 26 are drawn connecting points (X1,Y1) with (X2,Y2) and (X2,Y2) with (X3,Y3), lines 27 and 28 drawn perpendicular to the midpoint of lines 25 and 26 will intersect at the coordinate centerpoint 29 of the wafer 5. The centerpoint 29 is derived from the slopes of lines 27 and 28 and the midpoints of lines 25 and 26.</p>
<p id="p0035" num="0035">In the determination of the centerpoint of a semiconductor wafer, the fact that detected trigger points may fall on a wafer flat must be taken into consideration. Therefore, the<!-- EPO <DP n="21"> --> centerfinder algorithm of this invention incorporates a strategy such as that shown in Table II to ensure that at least one accurate centerpoint will be determined from six detected trigger points, assuming two points fall on a flat: 
<tables id="tabl0002" num="0002">
<table frame="all">
<title>TABLE II</title>
<tgroup cols="4" colsep="1" rowsep="0">
<colspec colnum="1" colname="col1" colwidth="39.37mm"/>
<colspec colnum="2" colname="col2" colwidth="39.37mm"/>
<colspec colnum="3" colname="col3" colwidth="39.37mm"/>
<colspec colnum="4" colname="col4" colwidth="39.37mm"/>
<thead valign="top">
<row rowsep="1">
<entry namest="col1" nameend="col1" align="center">#</entry>
<entry namest="col2" nameend="col2" align="center">Bad</entry>
<entry namest="col3" nameend="col3" align="center">Remaining Points</entry>
<entry namest="col4" nameend="col4" align="center">Points to Determine Center</entry></row></thead>
<tbody valign="top">
<row>
<entry namest="col1" nameend="col1" align="right">1</entry>
<entry namest="col2" nameend="col2" align="right">1,2</entry>
<entry namest="col3" nameend="col3" align="right">3,4,5,6</entry>
<entry namest="col4" nameend="col4" align="right">3+5,4+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">2</entry>
<entry namest="col2" nameend="col2" align="right">1,3</entry>
<entry namest="col3" nameend="col3" align="right">2,4,5,6</entry>
<entry namest="col4" nameend="col4" align="right">2+5,4+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">3</entry>
<entry namest="col2" nameend="col2" align="right">1,4</entry>
<entry namest="col3" nameend="col3" align="right">2,3,5,6</entry>
<entry namest="col4" nameend="col4" align="right">2+6,3+5</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">4</entry>
<entry namest="col2" nameend="col2" align="right">1,5</entry>
<entry namest="col3" nameend="col3" align="right">2,3,4,6</entry>
<entry namest="col4" nameend="col4" align="right">2+6,4+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">5</entry>
<entry namest="col2" nameend="col2" align="right">1,6</entry>
<entry namest="col3" nameend="col3" align="right">2,3,4,5</entry>
<entry namest="col4" nameend="col4" align="right">2+3,3+5</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">6</entry>
<entry namest="col2" nameend="col2" align="right">2,3</entry>
<entry namest="col3" nameend="col3" align="right">1,4,5,6</entry>
<entry namest="col4" nameend="col4" align="right">1+4,5+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">7</entry>
<entry namest="col2" nameend="col2" align="right">2,4</entry>
<entry namest="col3" nameend="col3" align="right">1,3,5,6</entry>
<entry namest="col4" nameend="col4" align="right">1+3,3+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">8</entry>
<entry namest="col2" nameend="col2" align="right">2,5</entry>
<entry namest="col3" nameend="col3" align="right">1,3,4,6</entry>
<entry namest="col4" nameend="col4" align="right">1+4,4+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">9</entry>
<entry namest="col2" nameend="col2" align="right">2,6</entry>
<entry namest="col3" nameend="col3" align="right">1,3,4,5</entry>
<entry namest="col4" nameend="col4" align="right">1+3,3+5</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">10</entry>
<entry namest="col2" nameend="col2" align="right">3,4</entry>
<entry namest="col3" nameend="col3" align="right">1,2,5,6</entry>
<entry namest="col4" nameend="col4" align="right">2+6,5+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">11</entry>
<entry namest="col2" nameend="col2" align="right">3,5</entry>
<entry namest="col3" nameend="col3" align="right">1,2,4,6</entry>
<entry namest="col4" nameend="col4" align="right">2+6,6+4</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">12</entry>
<entry namest="col2" nameend="col2" align="right">3,6</entry>
<entry namest="col3" nameend="col3" align="right">1,2,4,5</entry>
<entry namest="col4" nameend="col4" align="right">2+5,4+5</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">13</entry>
<entry namest="col2" nameend="col2" align="right">4,5</entry>
<entry namest="col3" nameend="col3" align="right">1,2,3,6</entry>
<entry namest="col4" nameend="col4" align="right">2+3,3+6</entry></row>
<row>
<entry namest="col1" nameend="col1" align="right">14</entry>
<entry namest="col2" nameend="col2" align="right">4,6</entry>
<entry namest="col3" nameend="col3" align="right">1,2,3,5</entry>
<entry namest="col4" nameend="col4" align="right">1+3,3+5</entry></row>
<row rowsep="1">
<entry namest="col1" nameend="col1" align="right">15</entry>
<entry namest="col2" nameend="col2" align="right">5,6</entry>
<entry namest="col3" nameend="col3" align="right">1,2,3,4</entry>
<entry namest="col4" nameend="col4" align="right">2+3,3+4</entry></row></tbody></tgroup>
</table>
</tables></p>
<p id="p0036" num="0036">The first column of Table II is an index showing that from six trigger points detected by a three-sensor array, there are fifteen (15) non-repetitive combinations of six points taken two at a time that can be used to determine the centerpoint of a wafer under the assumption that at least two points are "bad," i.e., they fall on a wafer flat. The second column represents each combination of two points out of the six trigger points which are assumed to be bad. The third column lists the remaining combinations of four trigger points left to determine the centerpoint of the wafer. The fourth and last column depicts the combinations of two sets of two points<!-- EPO <DP n="22"> --> used to determine the centerpoint of the wafer as shown in Fig. 5. The combinations of two points having the largest angular separation are preferred to minimize error.</p>
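The first three columns of Table II can be regenerated by enumerating the C(6,2) = 15 ways of discarding two of six points. This Python sketch is illustrative only; the fourth column's specific chord pairings reflect the angular-separation preference noted above and are a design choice not reproduced here:

```python
from itertools import combinations

def table_ii_rows():
    """Enumerate the 15 ways to assume two of six trigger points are
    'bad' (C(6,2) = 15), yielding (index, bad_pair, remaining_points)
    as in the first three columns of Table II."""
    points = (1, 2, 3, 4, 5, 6)
    for idx, bad in enumerate(combinations(points, 2), start=1):
        remaining = tuple(p for p in points if p not in bad)
        yield idx, bad, remaining
```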
<p id="p0037" num="0037">A subroutine is called and the information in the last column of Table II is retrieved for calculation of a "candidate" centerpoint of the wafer for each row #1-15. For each row, the radius to each of the six trigger points is calculated as illustrated in Fig. 4. The radii are then arranged in rank order. The largest four radii are compared and must fall within a designated measurement error. The remaining two radii must be either equal to (on the circle) or less than (on flats) the largest four. If this comparison test is passed, the "candidate" centerpoint is marked "good" and the centerpoint recorded. If the comparison test is not passed, the candidate centerpoint is marked "bad" and discarded. After all candidate centers are tested, those marked "good" are averaged to return the best estimate for the actual centerpoint of the wafer. If none of the six trigger points fall on a flat, fifteen candidate centerpoints will be averaged. If only one of the six trigger points is on a flat, five or more candidate centerpoints will be averaged. If two of the six trigger points fall on a flat, one or more candidate centerpoints will be averaged. Any fewer than two "bad" points will give a more robust determination of the wafer centerpoint.<!-- EPO <DP n="23"> --></p>
<p id="p0038" num="0038">There may be rare instances in which a wafer is so far off-center on the robot blade, toward the end of the blade, that the sensor closest to the robot center point will detect only the leading and trailing edges of the robot blade, or in which there is an input data acquisition error due to noise. In these cases, where the calculated radii are determined to be substantially different from the known radius of the wafer, the trigger points are eliminated. In the alternative, an automatic recovery routine can be executed whereby the centerfinder method is performed by a second array, another backup centerfinder system and/or operator intervention.</p>
<heading id="h0004"><b>b. Centerfinder Routine</b></heading>
<p id="p0039" num="0039">The logic flow chart 34 of the centerfinder routine of this invention is shown in Fig. 7. The centerfinder routine begins with standard instructions (35) to command the system transfer robot to pick up a wafer from a wafer chamber. The robot knows the position of the center of the wafer chamber having previously been calibrated by the calibration method of this invention described below.</p>
<p id="p0040" num="0040">The next step (36) in the routine is to retract the robot blade and the wafer from the first wafer chamber and rotate the robot blade and wafer across the sensor array en route to the second, destination wafer chamber. The sensors detect the trigger points of the wafer and interrupt input Table I is created (37) by storing the associated step counts. Having<!-- EPO <DP n="24"> --> determined and stored the corresponding step counts in Table I, the trigger points are next converted (38) to cartesian (X,Y) coordinates using equations:<maths id="math0001" num=""><math display="block"><mrow><mtable><mtr><mtd><mrow><mtable><mtr><mtd><mrow><mtext>X = r*sinθ </mtext></mrow></mtd></mtr><mtr><mtd><mrow><mtext>Y = r*cosθ.</mtext></mrow></mtd></mtr></mtable></mrow></mtd></mtr></mtable></mrow></math><img id="ib0001" file="imgb0001.tif" wi="29" he="13" img-content="math" img-format="tif"/></maths></p>
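The conversion at step (38) can be sketched as follows (illustrative Python, not part of the specification; θ is taken in degrees from the home reference line, and the stated convention X = r·sinθ, Y = r·cosθ measures θ from the Y axis):

```python
import math

def polar_to_cartesian(r: float, theta_deg: float):
    """Convert a trigger point's polar coordinates (r, θ) to cartesian
    (X, Y) using X = r·sinθ, Y = r·cosθ, with the robot center of
    rotation as the origin (0, 0)."""
    theta = math.radians(theta_deg)
    return r * math.sin(theta), r * math.cos(theta)
```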
<p id="p0041" num="0041">Once the trigger points' cartesian coordinates are calculated, the slopes "m" of the perpendicular bisector lines through pairs of points are calculated (39) using the standard equation:<maths id="math0002" num=""><math display="block"><mrow><mtext>m1 = -(x1 - x2)/(y1 - y2).</mtext></mrow></math><img id="ib0002" file="imgb0002.tif" wi="49" he="5" img-content="math" img-format="tif"/></maths></p>
<p id="p0042" num="0042">After the slopes of the perpendicular bisectors are determined, the midpoint coordinates of the lines through pairs of trigger points are determined (40) using the equations:<maths id="math0003" num=""><math display="block"><mrow><mtable><mtr><mtd><mrow><mtable><mtr><mtd><mrow><mtext>Xm1 = (x1 + x2)/2; Ym1 = (y1 + y2)/2</mtext></mrow></mtd></mtr><mtr><mtd><mrow><mtext>Xm2 = (x2 + x3)/2; Ym2 = (y2 + y3)/2</mtext></mrow></mtd></mtr></mtable></mrow></mtd></mtr></mtable></mrow></math><img id="ib0003" file="imgb0003.tif" wi="82" he="14" img-content="math" img-format="tif"/></maths></p>
<p id="p0043" num="0043">Having determined the slopes of the perpendicular bisectors 27 and 28 (Fig. 5) and the midpoints n1 and n2 of the lines through the data points P, the candidate centerpoint (xc,yc) of the wafer is determined by calculating (41) the intersection of the bisector lines 27 and 28 by the following equations:<maths id="math0004" num=""><math display="block"><mrow><mtable><mtr><mtd><mrow><mtable><mtr><mtd><mrow><mtext>xc = ((m1)Xm1 - Ym1 - (m2)Xm2 + Ym2)/(m1 - m2)</mtext></mrow></mtd></mtr><mtr><mtd><mrow><mtext>yc = ((m2)Ym1 - m1m2(Xm1) - m1(Ym2) + m1m2(Xm2))/(m2 - m1).</mtext></mrow></mtd></mtr></mtable></mrow></mtd></mtr></mtable></mrow></math><img id="ib0004" file="imgb0004.tif" wi="135" he="14" img-content="math" img-format="tif"/></maths> This calculation of candidate centerpoints continues, repeating steps (39), (40) and (41) until all other point combinations and candidate centerpoints have been calculated.<!-- EPO <DP n="25"> --></p>
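Steps (39), (40) and (41) can be sketched as a single routine (illustrative Python, not part of the specification; it assumes neither chord is horizontal, i.e. y1 ≠ y2 and y2 ≠ y3, so both bisector slopes are finite):

```python
def candidate_center(p1, p2, p3):
    """Candidate wafer centerpoint from three circumference points:
    intersect the perpendicular bisectors of chords p1-p2 and p2-p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Step (39): slopes of the perpendicular bisectors.
    m1 = -(x1 - x2) / (y1 - y2)
    m2 = -(x2 - x3) / (y2 - y3)
    # Step (40): midpoints of the chords.
    xm1, ym1 = (x1 + x2) / 2, (y1 + y2) / 2
    xm2, ym2 = (x2 + x3) / 2, (y2 + y3) / 2
    # Step (41): intersection of the two bisector lines.
    xc = (m1 * xm1 - ym1 - m2 * xm2 + ym2) / (m1 - m2)
    yc = (m2 * ym1 - m1 * m2 * xm1 - m1 * ym2 + m1 * m2 * xm2) / (m2 - m1)
    return xc, yc
```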
<p id="p0044" num="0044">Once all candidate centerpoints have been calculated, the distances (radii) between each candidate centerpoint and the trigger points are calculated and the largest four radii compared (43) to a predetermined threshold measurement error value ("Δ", e.g., 0.004). (The smallest two of the six radii are not used in the comparison as they are automatically assumed to be on wafer flats.) The radii from each candidate centerpoint to each of the trigger points are calculated using the equation:<maths id="math0005" num=""><math display="block"><mrow><msub><mrow><mtext mathvariant="italic">r</mtext></mrow><mrow><mtext mathvariant="italic">i</mtext></mrow></msub><mtext> = </mtext><msqrt><mtext>(</mtext><msub><mrow><mtext mathvariant="italic">xc - x</mtext></mrow><mrow><mtext mathvariant="italic">i</mtext></mrow></msub><msup><mrow><mtext>)</mtext></mrow><mrow><mtext>2</mtext></mrow></msup><mtext> + (</mtext><msub><mrow><mtext mathvariant="italic">yc - y</mtext></mrow><mrow><mtext mathvariant="italic">i</mtext></mrow></msub><msup><mrow><mtext>)</mtext></mrow><mrow><mtext>2</mtext></mrow></msup></msqrt></mrow></math><img id="ib0005" file="imgb0005.tif" wi="53" he="8" img-content="math" img-format="tif"/></maths> where i = 1, 2, 3, ..., 6 and where (x<sub>i</sub>,y<sub>i</sub>) are the cartesian coordinates of the detected trigger points. The calculated radii (e.g., r<sub>1</sub> to r<sub>6</sub>) are then arranged in rank order from largest to smallest (e.g., r<sub>a</sub>, r<sub>b</sub>, r<sub>c</sub>, r<sub>d</sub>, r<sub>e</sub>, r<sub>f</sub> where r<sub>a</sub> is the largest and r<sub>f</sub> is the smallest). 
The difference between the largest four is compared to the measurement error threshold Δ using the equation:<maths id="math0006" num=""><math display="block"><mrow><msub><mrow><mtext mathvariant="italic">r</mtext></mrow><mrow><mtext mathvariant="italic">a</mtext></mrow></msub><mtext> - </mtext><msub><mrow><mtext mathvariant="italic">r</mtext></mrow><mrow><mtext mathvariant="italic">d</mtext></mrow></msub><mtext> &lt; Δ</mtext></mrow></math><img id="ib0006" file="imgb0006.tif" wi="20" he="5" img-content="math" img-format="tif"/></maths> If the differential of the largest four radii is determined to be less than the threshold measurement error Δ, the candidate centerpoint is marked "good" and retained (45). If the differential is determined to be larger than the threshold measurement error Δ, the candidate centerpoint is marked "bad" (46) and discarded.<!-- EPO <DP n="26"> --></p>
<p id="p0045" num="0045">Once all candidate centerpoints have been marked, all "good" centerpoints are averaged (47) to produce the best estimate of the "true" centerpoint of the wafer. Once the averaged best centerpoint is obtained it is stored (48) for future reference when positioning the wafer for processing. The centerfinder program routine is then complete (49).</p>
<heading id="h0005"><b>3. Robot Calibration</b></heading>
<p id="p0046" num="0046">The accuracy of the wafer centerfinder method of this invention, and therefore the proper positioning of wafers for processing as described herein above, is dependent upon the proper calibration of the system robot. A reference point of the support blade 4a of the present invention is calibrated in relation to the centerpoint of each process chamber and a determined point along the robot blade extension curve (Fig. 10).</p>
<heading id="h0006"><b>a. Establish Repeatable Reference Point via Robot Home Sequence.</b></heading>
<p id="p0047" num="0047">To calibrate the support blade 4a in θ, a conventional robot home sequence routine is first executed to determine a repeatable home reference point for the center of the blade 4a. It will be understood that a robot home sequence routine can be readily created and tailored to a particular system, therefore the details of the home sequence routine are not discussed in depth herein. Once the home reference point is determined, it is stored for further use during the calibration and<!-- EPO <DP n="27"> --> centerfinding methods of this invention as disclosed herein. The home sequence is performed automatically and only once at system power up.</p>
<heading id="h0007"><b>b. Teaching the Robot the Process Chamber Centerpoints.</b></heading>
<p id="p0048" num="0048">Because the centerpoint of each wafer receiving chamber is a relevant point from which positioning corrections are made, the robot blade must be taught the centerpoint of each chamber.</p>
<p id="p0049" num="0049">The robot 4 is taught the centerpoint of each process chamber by positioning the centerpoint orifice 4c of the support blade 4a over a corresponding centerpoint orifice 65a or 65b (Figs. 3a-3d) of a chamber 3a or 3b. The positioning of the blade orifice 4c is performed manually by keyboard or remote control command. Once the blade center orifice 4c is positioned over the corresponding chamber center location 65a or 65b, a one-eighth inch peg (1 inch = 2,54 cm) is manually inserted through the robot blade orifice 4c into the corresponding chamber orifice 65a or 65b to verify the correct positioning of the robot blade 4a at the centerpoint of the processing chamber 3a or 3b. When correct positioning is verified, the step values of stepper motors 17a and 17b (Fig. 2) relative to the home sequence reference point are stored in non-volatile memory. This process chamber calibration sequence is repeated for each chamber and the associated step values recorded for further use in positioning the support blade 4a during wafer transfer. It should be noted that the teaching of the process chamber centerpoints need only<!-- EPO <DP n="28"> --> be performed once upon configuration of the processing system since the positions are stored in non-volatile memory for subsequent use even after system shut down (power off).</p>
<heading id="h0008"><b>c. Determining the Sensor Positions.</b></heading>
<p id="p0050" num="0050">The calibration of the support blade 4a is determined in relation to the known axis of rotation 4d of the robot center 4b and a determined point of a known radius on the non-linear extension curve (Fig. 10) of the system robot 4. This requires determining the location of at least one sensor 6a, 6b, 6c or 6d (Fig. 2), and then using that sensor of known position to calibrate the extension of the robot blade 4a. In the assembly of the system 1 (Fig. 1), the absolute positioning of the sensor beams 6a, 6b, 6c or 6d (Fig. 2) is not critical in the calibration of the robot blade 4a or the performance of the centerfinder method of this invention, as the calibration method can determine the position of each sensor regardless of sensor positioning.</p>
<p id="p0051" num="0051">Referring now back to Fig. 8, the positions of the exemplary photoelectric sensors 6d, 6c and 6b are found in terms of polar coordinates "r" and "θ". "r" is the distance of the sensor from the center of robot rotation 4d in inches. θ<sub>c</sub> is the angle of rotation of the support blade 4a from the known home sequence reference line θ<sub>0</sub>. By way of example, the θ<sub>c</sub> position of sensor 6d is determined by noting the angles θ<sub>a</sub> and<!-- EPO <DP n="29"> --> θ<sub>b</sub> where the edges 50a and 50b of support blade 4a break the beam 33. The middle point 51 between the two intercepts 50a and 50b corresponds to the angle θ<sub>c</sub> locating the sensor 6d. The robot support blade centerline 53 is shown aligned so that it is on a radial line from the center of rotation 4d of the robot 4. Thus, the location θ<sub>c</sub> (the angle from the robot home position θ<sub>0</sub>) of the sensor 6d is determined by summing the number of rotation steps to the detector intercept of each edge 50a and 50b from the robot home position θ<sub>0</sub> and dividing by two. By this procedure, the angular position of each of the sensors can be determined.</p>
<p id="p0052" num="0052">To determine the radial distance "r" of a sensor from the center of robot rotation 4d, where the width "W" of the robot blade is a known system constant and having determined θ<sub>w</sub>, the following calculation is performed:<maths id="math0007" num=""><math display="block"><mrow><msub><mrow><mtext>r = W/(2 sin(θ</mtext></mrow><mrow><mtext>w</mtext></mrow></msub><mtext>/2))</mtext></mrow></math><img id="ib0007" file="imgb0007.tif" wi="33" he="6" img-content="math" img-format="tif"/></maths>    where θ<sub>w</sub> is that angle shown in Fig. 8.</p>
<p id="p0053" num="0053">Once the radial distances "r" are determined and combined with the corresponding angles θ<sub>c</sub>, the exact location of each sensor is known. Calibration values of the sensor positions are then stored in non-volatile memory. In practice, it has been found that by approaching the sensor from both sides and taking the middle point as the actual sensor position, sensor hysteresis can be removed.<!-- EPO <DP n="30"> --></p>
<heading id="h0009"><b>d. Correlating the Robot Blade Position to the Robot Extension Curve.</b></heading>
<p id="p0054" num="0054">Referring now to Figs. 9 and 10, the robot blade 4a is calibrated in R in relation to the known center of rotation 4d of the robot. This is accomplished by correlating the support blade 4a in its retracted position as shown at A in Fig. 9 to one point of a known radius on the nonlinear extension curve of the robot 4 as shown in Fig. 10.</p>
<p id="p0055" num="0055">There is preferably located on the support blade 4a a one-eighth inch diameter orifice 4c disposed in a "center" position intended to be concentric with the centerpoint of a wafer 5 when the wafer 5 is exactly centered on the support blade 4a. The location of the support blade 4a is determined by first rotating the robot 4 so that the centerline of the blade (the longitudinal axis) is located at the angular coordinate of a selected sensor (see Fig. 9). The support blade 4a is then radially extended to positions B<sub>1</sub> and B<sub>2</sub> from its retracted position A such that the leading and trailing edges 54 and 55 (trigger points) of the centerpoint hole 4c are detected, the radial extension being exaggerated in Fig. 9 for clarity. The relative step counts from the robot home position A to the trigger points 54 and 55 are recorded. The average of the step counts to these two points locates the center of the orifice 4c (and thus the "center" of the blade 4a) and is stored in non-volatile memory.<!-- EPO <DP n="31"> --></p>
<p id="p0056" num="0056">By determining the required number of steps it takes to place the robot blade centerpoint orifice 4c at a radial distance of the selected sensor, one corollary point on the robot extension curve (Fig. 10) can be determined. For example, by plugging into the nonlinear robot extension equation (Fig. 10) the radial distance in inches of the selected sensor, we can determine the angular rotation of the stepper motor 17a (Fig. 2) required to move the support blade 4a from its retracted position to a position with its center point aligned with the radial line of a sensor. Because it is known that there are 200,000 steps for a full rotation (2π radians) of the robot 4, the extension or retraction from the radial line of the selected sensor to any selected point can be calculated using the robot extension equation. When this information is combined with the wafer centerfinder system and method disclosed hereinabove, a wafer 5 can be precisely positioned at any selected point.</p>
<heading id="h0010"><b>e. Calibration Routine</b></heading>
<p id="p0057" num="0057">The flow chart of the calibration routine 56 of this invention is shown in Fig. 11. The calibration routine begins with standard instructions (57) to drive motors 17a and 17b (Fig. 2) of the robot 4 from its home, blade retracted position and to rotate the device to a position such that the wafer support blade 4a passes through the beam of a selected sensor for detection of the edges (points 50a and 50b as shown in Fig. 8) of each parallel side of the blade.<!-- EPO <DP n="32"> --></p>
<p id="p0058" num="0058">The routine then records (58) the motor step values at which the selected sensor detects the leading and trailing edge points 50a and 50b of the blade 4a. The recorded step values are then averaged (59) to determine the angle θ<sub>c</sub> (Fig. 8) through which the robot must rotate to center the blade axis 65 (Fig. 9) at the sensor. This θ<sub>c</sub> value is then stored.</p>
<p id="p0059" num="0059">The radius of the selected sensor is then calculated (60) using the determined θ<sub>c</sub> (here, the angle subtended by the blade width between the two edge intercepts) and the known blade width w using the following equation:<maths id="math0008" num=""><math display="block"><mrow><msub><mrow><mtext>r = w/(2sin(θ</mtext></mrow><mrow><mtext>c</mtext></mrow></msub><mtext>/2))</mtext></mrow></math><img id="ib0008" file="imgb0008.tif" wi="31" he="6" img-content="math" img-format="tif"/></maths>    where:
<ul id="ul0002" list-style="none" compact="compact">
<li>θ<sub>c</sub> = 360((# steps leading edge)-(# steps trailing edge))/200,000; and</li>
<li>w = known blade width (a constant) in inches.</li>
</ul></p>
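Steps (58)-(60) can be sketched as follows (illustrative Python, not part of the specification; the blade's subtended angle is kept here as a separate symbol θ_w to distinguish it from the centering angle θ_c, which is the average of the two edge angles):

```python
import math

STEPS_PER_REV = 200_000  # steps per full 360° rotation of the robot

def sensor_polar_position(steps_leading: int, steps_trailing: int, blade_width: float):
    """Locate a sensor from the motor step counts at which it detected
    the leading and trailing edges (50a, 50b) of the support blade.
    Returns (r, theta_c_deg): the sensor radius in the units of the
    blade width, and the angle from home that centers the blade on
    the sensor."""
    # Step (59): angle from home to the blade centerline (edge average).
    theta_c_deg = (steps_leading + steps_trailing) / 2 * 360.0 / STEPS_PER_REV
    # Angle subtended by the blade width at the sensor radius.
    theta_w = math.radians(abs(steps_leading - steps_trailing) * 360.0 / STEPS_PER_REV)
    # Step (60): from W/2 = r·sin(θw/2), so r = W/(2·sin(θw/2)).
    r = blade_width / (2.0 * math.sin(theta_w / 2.0))
    return r, theta_c_deg
```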
<p id="p0060" num="0060">To determine the corollary position coordinates of the center point 4c of the support blade 4a to a position along the robot extension curve (Fig. 10), instructions (61) are next provided to rotate robot 4 such that the centerline 65 of robot blade 4a is aligned with the angular position of the selected sensor. The support blade 4a is then extended (62) such that the blade orifice 4c passes through the selected sensor beam. The sensor detects the leading and trailing edges 54 and 55 (Fig. 9) of the centerpoint orifice 4c, and the relative step values are then averaged and the averaged value stored (63) in non-volatile memory.<!-- EPO <DP n="33"> --></p>
<p id="p0061" num="0061">The associated theta on the robot extension curve corresponding to the average step value of the centerpoint of the blade orifice 4c is then calculated using the robot equation:<maths id="math0009" num=""><math display="block"><mrow><msup><mrow><mtext>θ = tan</mtext></mrow><mrow><mtext>-1</mtext></mrow></msup><mtext> </mtext><mfrac><mrow><mtext mathvariant="italic">H</mtext></mrow><mrow><mtext>(</mtext><mtext mathvariant="italic">L-I</mtext><mtext>)</mtext></mrow></mfrac><msup><mrow><mtext> + cos</mtext></mrow><mrow><mtext>-1</mtext></mrow></msup><mtext> </mtext><mfenced open="[" close="]"><mrow><mfrac><mrow><msup><mrow><mtext mathvariant="italic">G</mtext></mrow><mrow><mtext>2</mtext></mrow></msup><mtext>-</mtext><msup><mrow><mtext mathvariant="italic">F</mtext></mrow><mrow><mtext>2</mtext></mrow></msup><mtext>-</mtext><msup><mrow><mtext mathvariant="italic">H</mtext></mrow><mrow><mtext>2</mtext></mrow></msup><mtext>-(</mtext><mtext mathvariant="italic">L-I</mtext><msup><mrow><mtext>)</mtext></mrow><mrow><mtext>2</mtext></mrow></msup></mrow><mrow><mtext>-2</mtext><mtext mathvariant="italic">F</mtext><msqrt><msup><mrow><mtext mathvariant="italic">H</mtext></mrow><mrow><mtext>2</mtext></mrow></msup><mtext>+(</mtext><mtext mathvariant="italic">L-I</mtext><msup><mrow><mtext>)</mtext></mrow><mrow><mtext>2</mtext></mrow></msup></msqrt></mrow></mfrac></mrow></mfenced></mrow></math><img id="ib0009" file="imgb0009.tif" wi="83" he="14" img-content="math" img-format="tif"/></maths>    where:
<ul id="ul0003" list-style="none" compact="compact">
<li>F and G = the robot leg dimension constants as shown in Fig. 9;</li>
<li>H and I = the robot blade dimension constants as shown in Fig. 9; and<maths id="math0010" num=""><math display="block"><mrow><mtext mathvariant="italic">L</mtext><mtext> = </mtext><mtext mathvariant="italic">I-F</mtext><mtext> cos (180 - θ) + </mtext><msqrt><msup><mrow><mtext mathvariant="italic">G</mtext></mrow><mrow><mtext>2</mtext></mrow></msup><mtext>-[(</mtext><mtext mathvariant="italic">F</mtext><msup><mrow><mtext> sin [180 - θ])-1]</mtext></mrow><mrow><mtext>2</mtext></mrow></msup></msqrt></mrow></math><img id="ib0010" file="imgb0010.tif" wi="99" he="8" img-content="math" img-format="tif"/></maths> where</li>
<li>L = the radial distance of the selected sensor in inches from the center of rotation of the robot 4d.</li>
</ul></p>
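The robot extension equation above can be transcribed as follows (illustrative Python, not part of the specification; the dimension constants F, G, H and I take their actual values from Fig. 9, so any values substituted for them here are hypothetical):

```python
import math

def extension_theta(L, F, G, H, I):
    """Motor angle θ (radians) that places the blade centerpoint at
    radial distance L from the robot center of rotation 4d, per the
    robot extension equation: F and G are the robot leg dimension
    constants, H and I the blade dimension constants (Fig. 9)."""
    d = L - I
    return math.atan2(H, d) + math.acos(
        (G**2 - F**2 - H**2 - d**2) / (-2.0 * F * math.hypot(H, d))
    )

def theta_to_steps(theta_rad: float) -> int:
    """Convert an extension angle to stepper-motor steps, using the
    stated 200,000 steps per full rotation (2π radians)."""
    return round(theta_rad * 200_000 / (2.0 * math.pi))
```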
<p id="p0062" num="0062">Knowing the corresponding θ, the calibration process is complete, i.e., a calibration reference point is determined on the non-linear robot extension curve Fig. 10 from which all movements of the robot blade can be computed using the robot extension equations. It should be noted that the calibration routine need be performed only once after system configuration<!-- EPO <DP n="34"> --> and initial power up as the calibration reference point is stored in nonvolatile memory.</p>
</description><!-- EPO <DP n="35"> -->
<claims id="claims01" lang="en">
<claim id="c-en-01-0001" num="0001">
<claim-text>A system for accurately positioning an object at a preselected location comprising object transfer means including a support (4a) moveable in a predetermined arcuate path (66) about a fixed axis (4d) and along linear paths extending radially of the fixed axis around said arcuate path for carrying said object (5) between a first location and a preselected second location, the position of said moveable support being known at all times, and the position of said object relative to said moveable support initially being unknown, an array of optical sensors (6) including at least two sensors disposed between said locations, said sensors being operative to detect a plurality of points on the perimeter of the object (5) as it is carried by the moveable support (4a) between said locations to generate signals from which the position of said object relative to the known position of said moveable support can be determined, and means responsive to the signals and operative to position the object at said preselected location; characterised in that at least two sensors of said array of sensors (6) are disposed along an axis extending generally transverse to said arcuate path of the moveable support to detect said perimeter points of the object as it moves through said arcuate path.</claim-text></claim>
<claim id="c-en-01-0002" num="0002">
<claim-text>An object processing apparatus having an object positioning system as claimed in claim 1, characterised in that the apparatus has a central transfer chamber (2), a plurality of peripheral chambers (3a,3b...) positioned around the periphery of said central transfer chamber, said moveable object support (4a) being moveable within said central<!-- EPO <DP n="36"> --> transfer chamber along said arcuate path (66) between said peripheral chambers to load, move and unload said object (5) to and from said peripheral chambers, the object positioning system further comprising means for providing object support reference signals indicative of the position of an object support reference point (4c) and said sensors (6) being triggered by the leading and trailing edges of the moving object (5) as it passes therethrough to develop corresponding object position signals from which an object position reference point can be determined, and means being provided responsive to the object support reference signals and the object position signals operative to calculate the location of the object position relative to the object support, and further operative to move said object support to a corresponding offset position relative to said preselected location so as to position said object at said preselected location in one of the peripheral chambers.</claim-text></claim>
<claim id="c-en-01-0003" num="0003">
<claim-text>A processing apparatus as claimed in claim 2, characterised in that the object comprises a generally circular semiconductor wafer (5), the centerpoint (29) of which is concentric with said object support reference point (4c) when said wafer is perfectly positioned on said object support (4).</claim-text></claim>
<claim id="c-en-01-0004" num="0004">
<claim-text>A processing apparatus as claimed in claim 3, characterised in that the means responsive to said object position signals includes a computer system (12) programmed to control the transfer of said wafer from chamber to chamber.</claim-text></claim>
<claim id="c-en-01-0005" num="0005">
<claim-text>A processing apparatus as claimed in claim<!-- EPO <DP n="37"> --> 4, characterised in that the computer system includes control logic means (12) operative to determine the wafer position and object support reference points and to calculate the relative location of said wafer to said object support, means (13,15,16a,16b,17a,17b) for controlling the movements of the object support, and means (11) for interfacing the control logic means and the sensor array.</claim-text></claim>
<claim id="c-en-01-0006" num="0006">
<claim-text>A processing apparatus as claimed in any of claims 2 to 5, characterised in that said means providing object support reference signals includes a memory prerecorded to contain said object support reference signals.</claim-text></claim>
<claim id="c-en-01-0007" num="0007">
<claim-text>A processing apparatus as claimed in claims 2 to 6, characterised in that said object support includes an elongated blade (4) having an attribute (4c) detectible by said sensors.</claim-text></claim>
<claim id="c-en-01-0008" num="0008">
<claim-text>A processing apparatus as claimed in claim 7, characterised in that the detectible attribute of the object support (4) comprises a centrally disposed orifice (4c) provided in said blade having edges detectible by said sensors (6).</claim-text></claim>
<claim id="c-en-01-0009" num="0009">
<claim-text>A processing apparatus as claimed in any of claims 3 to 8, characterised in that said preselected location identifies a wafer position in one of the peripheral chambers.</claim-text></claim>
<claim id="c-en-01-0010" num="0010">
<claim-text>A processing apparatus as claimed in any of claims 2 to 9, characterised in that said sensors (6) are located outside of said central transfer chamber (2).</claim-text></claim>
<claim id="c-en-01-0011" num="0011">
<claim-text>A method for accurately positioning an<!-- EPO <DP n="38"> --> object having a centerpoint from a first location to a known preselected second location by detecting the relative position of the object with respect to a moveable object support upon which said object is supported and which is moveable in a predetermined arcuate path about a fixed axis and along linear paths extending radially of the axis around the arcuate path between said first and second locations, the position of said moveable object support at all times being known, and the position of said object relative to said moveable object support initially being unknown; the method comprising the steps of providing an array of sensors including at least two sensors mounted along an axis generally transverse to the path, detecting perimeter points along the perimeter of the object by moving the moveable object support along the path, thereby triggering the sensors to generate object signals from which the position of said object can be determined relative to the known position of said moveable object support, calculating the object position relative to the known position of the moveable object support from the object signals, and moving said moveable object support and said object supported thereon to the second preselected location so that said object is coincident with said selected location; characterised in that the object support is rotated about said fixed axis to move the object through said arcuate path, and in that during movement from said first to said second location said sensors are mounted along an axis generally transverse to the arcuate path to detect said perimeter points of the object as it moves through the arcuate path.</claim-text></claim>
<claim id="c-en-01-0012" num="0012">
<claim-text>A method as in claim 11, characterised in that the step of calculating the object position<!-- EPO <DP n="39"> --> comprises the steps of:
<claim-text>a. determining at least three polar coordinate points of the object from the known position of the moveable object support and the object position signals;</claim-text>
<claim-text>b. converting said polar coordinate points to cartesian coordinate points;</claim-text>
<claim-text>c. calculating the slopes of imaginary perpendicular bisector lines to imaginary lines joining at least two pairs of the cartesian coordinate points;</claim-text>
<claim-text>d. calculating the midpoints of the imaginary lines joining said pairs of said cartesian coordinate points;</claim-text>
<claim-text>e. calculating the points of intersection of the perpendicular bisector lines from the calculated slopes and midpoints;</claim-text>
<claim-text>f. repeating the steps c., d., and e., for all pairs of said cartesian coordinate points;</claim-text>
<claim-text>g. comparing the calculated points of intersection to a specified numerical range; and</claim-text>
<claim-text>h. calculating the average of the points of intersection falling within the specified numerical range.</claim-text></claim-text></claim>
<claim id="c-en-01-0013" num="0013">
<claim-text>A method as claimed in claim 11 or claim 12 for determining the position of said moveable object<!-- EPO <DP n="40"> --> support having a known width, a detectible attribute and a home reference position relative to a known location in a processing system, said moveable support being disposed within a transfer chamber having a plurality of chambers positioned around the periphery of said transfer chamber and moveable between said processing chambers along said arcuate path, comprising the steps of detecting the position of at least one of said sensors for providing sensor position signals, detecting the position of the moveable object support for generating object support position signals, and calculating the position of the object support relative to the known location from said sensor position signals and object support position signals.</claim-text></claim>
<claim id="c-en-01-0014" num="0014">
<claim-text>A method as in claim 13, characterised in that the step of detecting the position of said at least one of said sensors comprises the steps of detecting a leading and trailing edge of the moveable object support by moving said moveable object support along the arcuate path, thereby triggering the sensor to generate signals representing point positions of said detected leading and trailing edges, recording said point positions in terms of values, calculating the average of said values operative to give a rotational angle of said sensor position, and calculating the radius of the sensor position from the calculated rotational angle and the known width of the moveable object support.</claim-text></claim>
<claim id="c-en-01-0015" num="0015">
<claim-text>A method as in claim 14, characterised in that the step of detecting the position of the moveable object support comprises the steps of rotating the moveable object support so that the<!-- EPO <DP n="41"> --> position of said moveable object support is coincident with the radial axis of the sensor position, extending the moveable object support along the radial axis of the sensor causing said sensor to detect a leading and a trailing edge of said detectible attribute of said moveable object support and operative to generate associated position signals, recording said position signals in terms of values, and calculating a point along said arcuate path of said support equal to said values as a relative positioning value for moving said object support relative to said selected location.</claim-text></claim>
</claims><!-- EPO <DP n="42"> -->
<claims id="claims02" lang="de">
<claim id="c-de-01-0001" num="0001">
<claim-text>System zum genauen Positionieren eines Gegenstands an einem vorher ausgewählten Ort,
<claim-text>- mit einer Gegenstandsüberführungseinrichtung, welche einen Träger (4a) aufweist, der auf einem vorgegebenen gekrümmten Weg (66) um eine feststehende Achse (4d) und längs linearer Wege bewegbar ist, die sich radial von der feststehenden Achse um den gekrümmten Weg herum erstrecken, um den Gegenstand (5) zwischen einem ersten Ort und einem ausgewählten zweiten Ort zu transportieren, wobei die Position des beweglichen Trägers zu jeder Zeit bekannt und die Position des Gegenstands bezüglich des beweglichen Trägers am Anfang unbekannt ist, und</claim-text>
<claim-text>- mit einer Anordnung von optischen Sensoren (6), die wenigstens zwei Sensoren aufweist, die zwischen den genannten Orten angeordnet sind, wobei die Sensoren so wirken, daß sie eine Vielzahl von Punkten am Umfang des Gegenstandes (5) erfassen, wenn dieser von dem beweglichen Träger (4a) zwischen den genannten Orten transportiert wird, und Signale erzeugen, aus denen die Position des Gegenstands bezüglich der bekannten Position des beweglichen Trägers bestimmt werden kann, und</claim-text>
<claim-text>- mit Einrichtungen, die auf die Signale ansprechen und so wirken, daß der Gegenstand an dem vorher ausgewählten Ort positioniert wird,<br/>
dadurch gekennzeichnet,</claim-text>
<claim-text>- daß wenigstens zwei Sensoren der Anordnung von Sensoren (6) längs einer Achse angeordnet sind, die sich insgesamt quer zu dem gekrümmten Weg des beweglichen Trägers erstreckt, um die Umfangspunkte des Gegenstandes zu erfassen, wenn er sich auf dem gekrümmten Weg bewegt.</claim-text><!-- EPO <DP n="43"> --></claim-text></claim>
<claim id="c-de-01-0002" num="0002">
<claim-text>Vorrichtung zum Behandeln eines Gegenstands mit einem System zum Positionieren eines Gegenstands nach Anspruch 1, dadurch gekennzeichnet,
<claim-text>- daß die Vorrichtung eine zentrale Überführungskammer (2) und eine Vielzahl von Umfangskammern (3a, 3b...) aufweist, die um den Umfang der zentralen Überführungskammer herum angeordnet sind, wobei der bewegliche Gegenstandsträger (4a) in der zentralen Überführungskammer längs des gekrümmten Wegs (66) zwischen den Umfangskammern bewegbar ist, um den Gegenstand in die Umfangskammern und aus den Umfangskammern zu laden, zu bewegen und zu entladen,</claim-text>
<claim-text>- daß das System zum Positionieren eines Gegenstands weiterhin Einrichtungen zum Bereitstellen von Gegenstandsträger-Bezugssignalen aufweist, die die Position eines Gegenstandsträger-Bezugspunkts (4c) kennzeichnen, wobei die Sensoren (6) durch die Vorder- und Hinterkante des sich bewegenden Gegenstandes (5) bei seinem Durchgang gestartet werden, um entsprechende Gegenstandspositionssignale zu erzeugen, aus denen ein Gegenstandspositions-Bezugspunkt bestimmt werden kann, und</claim-text>
<claim-text>- daß Einrichtungen vorgesehen sind, die auf die Gegenstandsträger-Bezugssignale und die Gegenstandspositionssignale ansprechen und so arbeiten, daß der Ort der Gegenstandsposition bezüglich des Gegenstandsträgers berechnet wird, und ferner so wirken, daß der Gegenstandsträger zu einer entsprechenden bezüglich des vorher ausgewählten Ortes versetzten Position bewegt wird, um den Gegenstand an dem vorher ausgewählten Ort in einer der Umfangskammern zu positionieren.</claim-text></claim-text></claim>
<claim id="c-de-01-0003" num="0003">
<claim-text>Vorrichtung zum Behandeln nach Anspruch 2, dadurch gekennzeichnet, daß der Gegenstand einen insgesamt kreisförmigen Halbleiterwafer (5) aufweist, dessen Mittelpunkt (29) konzentrisch zum Gegenstandsträger-Bezugspunkt (4c) ist, wenn der Wafer auf dem Gegenstandsträger (4) genau positioniert ist.<!-- EPO <DP n="44"> --></claim-text></claim>
<claim id="c-de-01-0004" num="0004">
<claim-text>Vorrichtung zum Behandeln nach Anspruch 3, dadurch gekennzeichnet, daß die Einrichtungen, die auf die Gegenstandspositionssignale ansprechen, ein Rechnersystem (12) aufweisen, das so programmiert ist, daß die Überführung des Wafers von Kammer zu Kammer gesteuert wird.</claim-text></claim>
<claim id="c-de-01-0005" num="0005">
<claim-text>Vorrichtung zum Behandeln nach Anspruch 4, dadurch gekennzeichnet, daß das Rechnersystem eine Steuerlogikeinrichtung (12), die so arbeitet, daß die Waferposition und die Gegenstandsträger-Bezugspunkte bestimmt und der relative Ort des Wafers zu dem Gegenstandsträger berechnet wird, Einrichtungen (13, 15, 16a, 16b, 17a, 17b) zum Steuern der Bewegungen des Gegenstandsträgers und Einrichtungen (11) zur Schnittstellenbildung für die Steuerlogikeinrichtungen und die Sensoranordnung hat.</claim-text></claim>
<claim id="c-de-01-0006" num="0006">
<claim-text>Vorrichtung zum Behandeln nach einem der Ansprüche 2 bis 5, dadurch gekennzeichnet, daß die Einrichtungen, die Gegenstandsträger-Bezugssignale bereitstellen, einen Speicher mit einer Voraufzeichnung aufweisen, so daß er die Gegenstandsträger-Bezugssignale enthält.</claim-text></claim>
<claim id="c-de-01-0007" num="0007">
<claim-text>Vorrichtung zum Behandeln nach einem der Ansprüche 2 bis 6, dadurch gekennzeichnet, daß der Gegenstandsträger ein langgestrecktes Blatt (4) aufweist, das einen Zusatz (4c) hat, der von den Sensoren anmeßbar ist.</claim-text></claim>
<claim id="c-de-01-0008" num="0008">
<claim-text>Vorrichtung zum Behandeln nach Anspruch 7, dadurch gekennzeichnet, daß der anmeßbare Zusatz des Gegenstandsträgers (4) eine zentral angeordnete Öffnung (4c) aufweist, die in dem Blatt vorgesehen ist und Ränder hat, die von den Sensoren (6) anmeßbar sind.</claim-text></claim>
<claim id="c-de-01-0009" num="0009">
<claim-text>Vorrichtung zum Behandeln nach einem der Ansprüche 3 bis 8, dadurch gekennzeichnet, daß der vorher ausgewählte Ort für eine Waferposition in einer der Umfangskammern steht.<!-- EPO <DP n="45"> --></claim-text></claim>
<claim id="c-de-01-0010" num="0010">
<claim-text>Vorrichtung zum Behandeln nach einem der Ansprüche 2 bis 9, dadurch gekennzeichnet, daß sich die Sensoren (6) auf der Außenseite der zentralen Überführungskammer (2) befinden.</claim-text></claim>
<claim id="c-de-01-0011" num="0011">
<claim-text>Verfahren zum genauen Positionieren eines einen Mittelpunkt aufweisenden Gegenstands von einem ersten Ort zu einem bekannten, vorher ausgewählten zweiten Ort durch Erfassen der relativen Position des Gegenstands bezüglich eines beweglichen Gegenstandsträgers, auf welchem der Gegenstand getragen wird und der auf einem vorgegebenen gekrümmten Weg um eine feststehende Achse und längs linearer Wege bewegbar ist, die sich radial zu der Achse um den gekrümmten Weg herum zwischen dem ersten und zweiten Ort erstrecken, wobei die Position des beweglichen Gegenstandsträgers jederzeit bekannt und die Position des Gegenstands bezüglich des beweglichen Gegenstandsträgers am Anfang unbekannt ist und wobei das Verfahren die Schritte aufweist,
<claim-text>- eine Anordnung von Sensoren mit wenigstens zwei Sensoren vorzusehen, die längs einer Achse angeordnet sind, die zu dem Weg insgesamt quer verläuft,</claim-text>
<claim-text>- Umfangspunkte längs des Umfangs des Gegenstandes dadurch zu erfassen, daß der bewegliche Gegenstandsträger längs des Wegs bewegt wird, wodurch die Sensoren gestartet werden, so daß sie Gegenstandssignale erzeugen, aus denen die Position des Gegenstands bezüglich der bekannten Position des beweglichen Gegenstandsträgers bestimmt werden kann,</claim-text>
<claim-text>- die Gegenstandsposition bezüglich der bekannten Position des beweglichen Gegenstandsträgers aus den Gegenstandssignalen zu berechnen und</claim-text>
<claim-text>- den beweglichen Gegenstandsträger und den darauf getragenen Gegenstand zu dem zweiten vorher ausgewählten Ort so zu bewegen, daß der Gegenstand zu dem ausgewählten Ort koinzident wird,<br/>
dadurch gekennzeichnet,<!-- EPO <DP n="46"> --></claim-text>
<claim-text>- daß der Gegenstandsträger um die feststehende Achse gedreht wird, um den Gegenstand auf dem gekrümmten Weg zu bewegen, und</claim-text>
<claim-text>- daß während der Bewegung von dem ersten Ort zum zweiten Ort die Sensoren längs einer Achse angeordnet sind, die insgesamt quer zu dem gekrümmten Weg verläuft, um die Umfangspunkte des Gegenstandes zu erfassen, wenn er sich auf dem gekrümmten Weg bewegt.</claim-text></claim-text></claim>
<claim id="c-de-01-0012" num="0012">
<claim-text>Verfahren nach Anspruch 11, dadurch gekennzeichnet, daß der Schritt der Berechnung der Gegenstandsposition die Schritte aufweist
<claim-text>a) Bestimmen von wenigstens drei Polarkoordinatenpunkten des Gegenstands aus der bekannten Position des beweglichen Gegenstandsträgers und aus den Gegenstandspositionssignalen,</claim-text>
<claim-text>b) Umwandeln der Polarkoordinatenpunkte in kartesische Koordinatenpunkte,</claim-text>
<claim-text>c) Berechnen der Neigungen von imaginären senkrechten Halbierungslinien zu imaginären Linien, welche wenigstens zwei Paare der kartesischen Koordinatenpunkte verbinden,</claim-text>
<claim-text>d) Berechnen der Mittelpunkte der imaginären Linien, welche die Paare der kartesischen Koordinatenpunkte verbinden,</claim-text>
<claim-text>e) Berechnen der Schnittpunkte der senkrechten Halbierungslinien aus den berechneten Neigungen und Mittelpunkten,</claim-text>
<claim-text>f) Wiederholen der Schritte c), d) und e) für alle Paare von kartesischen Koordinatenpunkten,</claim-text>
<claim-text>g) Vergleichen der berechneten Schnittpunkte mit einem spezifizierten numerischen Bereich, und</claim-text>
<claim-text>h) Berechnen des Mittels der Schnittpunkte, die in den spezifizierten numerischen Bereich fallen.</claim-text><!-- EPO <DP n="47"> --></claim-text></claim>
<claim id="c-de-01-0013" num="0013">
<claim-text>Verfahren nach Anspruch 11 oder Anspruch 12 zum Bestimmen der Position des beweglichen Gegenstandsträgers, der eine bekannte Breite, einen anmeßbaren Zusatz und eine Heimbezugsposition bezüglich eines bekannten Orts in einem Behandlungssystem hat,
<claim-text>- wobei der bewegliche Träger in einer Überführungskammer angeordnet ist, die eine Vielzahl von Kammern aufweist, die um den Umfang der Überführungskammer herum angeordnet sind, und zwischen den Behandlungskammern längs des gekrümmten Weges bewegbar ist, und</claim-text>
<claim-text>- wobei das Verfahren die Schritte aufweist,
<claim-text>-- die Position wenigstens eines der Sensoren zur Bereitstellung von Sensorpositionssignalen zu erfassen,</claim-text>
<claim-text>-- die Position des beweglichen Gegenstandsträgers zur Erzeugung von Gegenstandsträger-Positionssignalen zu erfassen und</claim-text>
<claim-text>-- die Position des Gegenstandsträgers bezüglich des bekannten Orts aus den Sensorpositionssignalen und den Gegenstandsträger-Positionssignalen zu berechnen.</claim-text></claim-text></claim-text></claim>
<claim id="c-de-01-0014" num="0014">
<claim-text>Verfahren nach Anspruch 13, dadurch gekennzeichnet, daß der Schritt zum Erfassen der Position des wenigstens einen Sensors die Schritte aufweist,
<claim-text>- eine Vorder- und Hinterkante des beweglichen Gegenstandsträgers durch Bewegen des beweglichen Gegenstandsträgers längs des gekrümmten Weges zu erfassen, wodurch der Sensor gestartet wird, um Signale zu erzeugen, die Punktpositionen der erfaßten Vorder- und Hinterkante darstellen,</claim-text>
<claim-text>- die Punktpositionen in Ausdrücken von Werten aufzuzeichnen,</claim-text>
<claim-text>- das Mittel der Werte mit der Wirkung zu berechnen, daß sie einen Drehwinkel der Sensorposition geben, und</claim-text>
<claim-text>- den Radius der Sensorposition aus dem berechneten Drehwinkel und aus der bekannten Breite des beweglichen Gegenstandsträgers zu berechnen.</claim-text><!-- EPO <DP n="48"> --></claim-text></claim>
<claim id="c-de-01-0015" num="0015">
<claim-text>Verfahren nach Anspruch 14, dadurch gekennzeichnet, daß der Schritt des Erfassens der Position des beweglichen Gegenstandsträgers die Schritte aufweist,
<claim-text>- den beweglichen Gegenstandsträger so zu drehen, daß die Position des beweglichen Gegenstandsträgers mit der radialen Achse der Sensorposition zusammenfällt,</claim-text>
<claim-text>- den beweglichen Gegenstandsträger längs der radialen Achse des Sensors auszufahren, was den Sensor veranlaßt, eine Vorder- und eine Hinterkante des anmeßbaren Zusatzes des beweglichen Gegenstandsträgers zu erfassen mit der Wirkung, daß zugeordnete Positionssignale erzeugt werden,</claim-text>
<claim-text>- die Positionssignale in Ausdrücken von Werten aufzuzeichnen und</claim-text>
<claim-text>- einen Punkt längs des gekrümmten Weges des Trägers, der gleich den Werten ist, als einen relativen Positionierwert zum Bewegen des Gegenstandsträgers bezüglich des ausgewählten Ortes zu berechnen.</claim-text></claim-text></claim>
</claims><!-- EPO <DP n="49"> -->
<claims id="claims03" lang="fr">
<claim id="c-fr-01-0001" num="0001">
<claim-text>Système pour positionner avec précision un objet en un emplacement présélectionné, comprenant des moyens de transfert d'objet comportant un support (4a) mobile selon un trajet en forme d'arc prédéterminé (66) autour d'un axe fixe (4d) et le long de trajets linéaires s'étendant radialement par rapport à l'axe fixe autour dudit trajet en forme d'arc pour porter ledit objet (5) entre un premier emplacement et un deuxième emplacement présélectionné, la position dudit support mobile étant connue à tous moments, et la position dudit objet par rapport audit support mobile étant initialement inconnue, un groupement de détecteurs optiques (6) comprenant au moins deux détecteurs disposés entre lesdits emplacements, lesdits détecteurs agissant de façon à détecter une pluralité de points sur le périmètre de l'objet (5) lorsqu'il est porté par le support mobile (4a) entre lesdits emplacements de façon à générer des signaux à partir desquels la position dudit objet par rapport à la position connue dudit support mobile peut être déterminée, et des moyens réagissant aux signaux et agissant de façon à positionner l'objet dans ledit emplacement présélectionné ; caractérisé en ce qu'au moins deux détecteurs dudit groupement de détecteurs (6) sont disposés le long d'un axe s'étendant globalement transversalement par rapport audit trajet en forme d'arc du support mobile afin de détecter lesdits points de périmètre de l'objet lorsqu'il se déplace le long dudit trajet en forme d'arc.</claim-text></claim>
<claim id="c-fr-01-0002" num="0002">
<claim-text>Dispositif de traitement d'objet comportant un système de positionnement d'objet selon la revendication 1, caractérisé en ce que le dispositif comporte une chambre de transfert centrale (2), une pluralité de chambres périphériques (3a, 3b,...) positionnées autour de la périphérie de ladite chambre de transfert centrale, ledit support d'objet mobile (4a) étant mobile à l'intérieur de ladite chambre de transfert centrale le long dudit trajet<!-- EPO <DP n="50"> --> en forme d'arc (66) entre lesdites chambres périphériques, pour charger, déplacer et décharger ledit objet (5) vers et depuis lesdites chambres périphériques, le système de positionnement d'objet comprenant de plus des moyens pour délivrer des signaux de référence de support d'objet indicatifs de la position d'un point de référence de support d'objet (4c), et lesdits détecteurs (6) étant déclenchés par les bords avant et arrière de l'objet mobile (5) lorsqu'il passe au niveau de ceux-ci de façon à développer des signaux de position d'objet correspondants à partir desquels un point de référence de position d'objet peut être déterminé, et des moyens étant présents, ceux-ci réagissant aux signaux de référence de support d'objet et aux signaux de position d'objet, et agissant de façon à calculer l'emplacement de la position de l'objet par rapport au support d'objet, et agissant également de façon à déplacer ledit support d'objet vers une position décalée correspondante par rapport audit emplacement présélectionné de façon à positionner ledit objet dans ledit emplacement présélectionné dans l'une des chambres périphériques.</claim-text></claim>
<claim id="c-fr-01-0003" num="0003">
<claim-text>Dispositif de traitement selon la revendication 2, caractérisé en ce que l'objet comprend une plaquette de semiconducteurs globalement circulaire (5), dont le point central (29) est concentrique audit point de référence de support d'objet (4c) lorsque ladite plaquette est parfaitement positionnée sur ledit support d'objet (4).</claim-text></claim>
<claim id="c-fr-01-0004" num="0004">
<claim-text>Dispositif de traitement selon la revendication 3, caractérisé en ce que les moyens réagissant auxdits signaux de position d'objet comprennent un système d'ordinateur (12) programmé pour commander le transfert de ladite plaquette d'une chambre à l'autre.</claim-text></claim>
<claim id="c-fr-01-0005" num="0005">
<claim-text>Dispositif de traitement selon la revendication 4, caractérisé en ce que le système d'ordinateur comprend des moyens logiques de commande (12) agissant de façon à déterminer les points de référence de position d'objet et de support d'objet et à calculer l'emplacement relatif de<!-- EPO <DP n="51"> --> ladite plaquette par rapport audit support d'objet, des moyens (13, 15, 16a, 16b, 17a, 17b) pour commander les déplacements du support d'objet, et des moyens (11) pour établir une interface entre les moyens logiques de commande et le groupement de détecteurs.</claim-text></claim>
<claim id="c-fr-01-0006" num="0006">
<claim-text>Dispositif de traitement selon l'une quelconque des revendications 2 à 5, caractérisé en ce que lesdits moyens délivrant des signaux de référence de support d'objet comprennent une mémoire préenregistrée de façon à contenir lesdits signaux de référence de support d'objet.</claim-text></claim>
<claim id="c-fr-01-0007" num="0007">
<claim-text>Dispositif de traitement selon les revendications 2 à 6, caractérisé en ce que ledit support d'objet comprend une lame allongée (4) comprenant un attribut (4c) détectable par lesdits détecteurs.</claim-text></claim>
<claim id="c-fr-01-0008" num="0008">
<claim-text>Dispositif de traitement selon la revendication 7, caractérisé en ce que l'attribut détectable du support d'objet (4) comprend un orifice disposé de façon centrale (4c) prévu dans ladite lame comportant des bords détectables par lesdits détecteurs (6).</claim-text></claim>
<claim id="c-fr-01-0009" num="0009">
<claim-text>Dispositif de traitement selon l'une quelconque des revendications 3 à 8, caractérisé en ce que ledit emplacement présélectionné identifie une position de plaquette dans l'une des chambres périphériques.</claim-text></claim>
<claim id="c-fr-01-0010" num="0010">
<claim-text>Dispositif de traitement selon l'une quelconque des revendications 2 à 9, caractérisé en ce que lesdits détecteurs (6) sont disposés à l'extérieur de ladite chambre de transfert centrale (2).</claim-text></claim>
<claim id="c-fr-01-0011" num="0011">
<claim-text>Procédé pour positionner avec précision un objet comportant un point central d'un premier emplacement à un deuxième emplacement présélectionné connu en détectant la position relative de l'objet par rapport à un support d'objet mobile sur lequel ledit objet est supporté, et qui est mobile selon un trajet en forme d'arc prédéterminé autour d'un axe fixe et le long de trajets linéaires s'étendant radialement par rapport à l'axe autour du trajet en forme d'arc entre lesdits premier et deuxième<!-- EPO <DP n="52"> --> emplacements, la position dudit support d'objet mobile à tous moments étant connue, et la position dudit objet par rapport audit support d'objet mobile étant initialement inconnue ; le procédé comprenant les étapes de disposition d'un groupement de détecteurs comprenant au moins deux détecteurs montés le long d'un axe globalement transversal au trajet, de détection de points de périmètre le long du périmètre de l'objet en déplaçant le support d'objet mobile le long du trajet, de façon à déclencher par conséquent les détecteurs afin de générer des signaux d'objet à partir desquels la position dudit objet peut être déterminée par rapport à la position connue dudit support d'objet mobile, de calcul de la position de l'objet par rapport à la position connue du support d'objet mobile à partir des signaux d'objet, et de déplacement dudit support d'objet mobile et dudit objet supporté sur celui-ci vers le deuxième emplacement présélectionné de telle sorte que ledit objet coïncide avec ledit emplacement sélectionné ; caractérisé en ce que le support d'objet tourne autour dudit axe fixe de façon à déplacer l'objet le long dudit trajet en forme d'arc durant le déplacement dudit premier audit deuxième emplacement, et en ce que lesdits détecteurs sont montés le long d'un axe globalement transversal au trajet en forme d'arc de façon à détecter lesdits points de périmètre de l'objet lorsqu'il se déplace le long du trajet en forme d'arc.</claim-text></claim>
<claim id="c-fr-01-0012" num="0012">
<claim-text>Procédé selon la revendication 11, caractérisé en ce que l'étape de calcul de la position de l'objet comprend les étapes suivantes :
<claim-text>a. la détermination d'au moins trois points de coordonnées polaires de l'objet à partir de la position connue du support d'objet mobile et des signaux de position d'objet ;</claim-text>
<claim-text>b. la conversion desdits points de coordonnées polaires en points de coordonnées cartésiennes ;</claim-text>
<claim-text>c. le calcul des pentes de lignes bissectrices<!-- EPO <DP n="53"> --> perpendiculaires imaginaires vis-à-vis de lignes imaginaires reliant au moins deux paires des points de coordonnées cartésiennes ;</claim-text>
<claim-text>d. le calcul des points milieux des lignes imaginaires reliant lesdites paires desdits points de coordonnées cartésiennes ;</claim-text>
<claim-text>e. le calcul des points d'intersection des lignes bissectrices perpendiculaires à partir des pentes et des points milieux calculés ;</claim-text>
<claim-text>f. la répétition des étapes c., d., et e., pour toutes les paires desdits points de coordonnées cartésiennes ;</claim-text>
<claim-text>g. la comparaison des points d'intersection calculés à une plage numérique spécifiée ; et</claim-text>
<claim-text>h. le calcul de la moyenne des points d'intersection rentrant à l'intérieur de la plage numérique spécifiée.</claim-text></claim-text></claim>
<claim id="c-fr-01-0013" num="0013">
<claim-text>A method according to claim 11 or claim 12 for determining the position of said movable object support having a known width, a detectable attribute and a home reference position relative to a known location in a processing system, said movable support being disposed within a transfer chamber having a plurality of processing chambers positioned about the periphery of said transfer chamber and being movable between said processing chambers along said arcuate path, comprising the steps of sensing the position of at least one of said sensors to provide sensor position signals, sensing the position of the movable object support to generate object support position signals, and calculating the position of the object support relative to the known location from said sensor position signals and said object support position signals.</claim-text></claim>
<claim id="c-fr-01-0014" num="0014">
<claim-text>A method according to claim 13, characterised in that the step of sensing the position of said at least one of said sensors comprises the steps of<!-- EPO <DP n="54"> --> sensing a leading edge and a trailing edge of the movable object support by moving said movable object support along the arcuate path, so as thereby to trigger the sensor to generate signals representing the point positions of said sensed leading and trailing edges, recording said point positions as values, averaging said values so as to yield a rotation angle of said sensor position, and calculating the radius of the sensor position from the calculated rotation angle and the known width of the movable object support.</claim-text></claim>
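Claim 14 can be read as simple chord geometry: the mean of the leading- and trailing-edge trigger angles gives the sensor's angular position, and the angular interval swept while the support of known width W covers the sensor fixes its radius, since W is then a chord of the swept circle. The relation r = W / (2·sin(Δθ/2)) in this sketch is our reading of the claim, not stated in it:

```python
import math

def sensor_position(theta_lead, theta_trail, support_width):
    """Sketch of claim 14: locate a fixed sensor from the rotation
    angles (radians) at which the support's leading and trailing
    edges trigger it.  The chord relation r = W / (2*sin(dtheta/2))
    is an assumed reading; the claim only says the radius is computed
    from the rotation angle and the known support width."""
    theta = (theta_lead + theta_trail) / 2.0   # averaged edge angles
    dtheta = abs(theta_trail - theta_lead)     # angular width swept
    radius = support_width / (2.0 * math.sin(dtheta / 2.0))
    return theta, radius
```

For a support 100 units wide whose edges trigger a sensor 250 units from the axis, the angular interval is 2·asin(100/500), and the function returns that radius together with the mid-sweep angle.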
<claim id="c-fr-01-0015" num="0015">
<claim-text>A method according to claim 14, characterised in that the step of sensing the position of the movable object support comprises the steps of rotating the movable object support such that the position of said movable object support is coincident with the radial axis of the sensor position, extending the movable object support along the radial axis of the sensor so as to cause said sensor to sense a leading edge and a trailing edge of said detectable attribute of said movable object support and so as to act to generate associated position signals, recording said position signals as values, and calculating a point along said arcuate path of said support equal to said values as a relative positioning value for moving said object support relative to said selected location.</claim-text></claim>
</claims><!-- EPO <DP n="55"> -->
<drawings id="draw" lang="en">
<figure id="f0001" num=""><img id="if0001" file="imgf0001.tif" wi="157" he="224" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="56"> -->
<figure id="f0002" num=""><img id="if0002" file="imgf0002.tif" wi="157" he="227" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="57"> -->
<figure id="f0003" num=""><img id="if0003" file="imgf0003.tif" wi="85" he="187" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="58"> -->
<figure id="f0004" num=""><img id="if0004" file="imgf0004.tif" wi="157" he="244" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="59"> -->
<figure id="f0005" num=""><img id="if0005" file="imgf0005.tif" wi="138" he="246" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="60"> -->
<figure id="f0006" num=""><img id="if0006" file="imgf0006.tif" wi="156" he="211" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="61"> -->
<figure id="f0007" num=""><img id="if0007" file="imgf0007.tif" wi="84" he="246" img-content="drawing" img-format="tif"/></figure>
</drawings>
</ep-patent-document>
