[0001] This invention relates to bio-optical sensors, that is, light-sensitive semiconductor
devices which detect and measure light emitted by the reaction of a reagent with a
biological sample.
[0002] In known bio-optical sensors, the reaction takes place on a surface of the semiconductor
device, which is an image surface divided into pixels. The light produced by reactions
of this nature is weak, and accordingly the signal produced by any pixel of the device
is also small. The signal is frequently smaller than other contributions such as dark current
(leakage current) from the pixel and voltage offsets. A calibration/cancellation
scheme is therefore necessary to increase the sensitivity of the system.
[0003] In the related field of solid state image sensors there are a number of known techniques
for achieving calibration. In image sensors, it is necessary to have a continuous
image plane on which the image is formed. Calibration techniques involve either the
use of dark frame cancellation or the use of special calibration pixels.
[0004] In dark frame cancellation, a dark reference frame is taken and the resulting signal
output is subtracted from the image frame. The dark reference frame is usually taken
with the same exposure (integration time) as the image frame but with no light impinging on
the sensor, either by use of a shutter or by turning off the scene illumination.
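As a minimal sketch of dark frame cancellation, the following Python fragment subtracts a dark reference frame from an image frame taken with the same integration time; the array size and the synthetic values are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical 8x8 readouts with arbitrary synthetic values, for illustration only.
rng = np.random.default_rng(0)
image_frame = rng.poisson(lam=25.0, size=(8, 8)).astype(float)  # exposure with light
dark_frame = rng.poisson(lam=5.0, size=(8, 8)).astype(float)    # same integration time, no light

# Dark frame cancellation: subtract the dark reference from the image frame, pixel by pixel.
corrected = image_frame - dark_frame
```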
[0005] When calibration pixels are used, these are provided at the edge of the sensor, usually
in the form of a single row or column, since it is necessary to have a continuous
image surface.
[0006] In current bio-optical sensors, a dark image is acquired before the analyte and reagents
are deposited on the sensor, and this calibration image is used during detection and
processing of the photo-signal. This means that there is a time difference between
the acquisition of the dark reference frame and the detection and processing of the
sensor signal. During this time there may be changes in the conditions on the device,
e.g. operating voltage and temperature may change due to a falling battery voltage, a change in
ambient temperature, or self-heating caused by power dissipation, and thus the calibration
signal is not an accurate representation of the dark signal at the relevant time.
[0007] An object of the present invention is to provide a bio-optical sensor having a more
accurate calibration signal. This increases system sensitivity and enables the system
to function with less analyte, less reagent, or in a shorter time.
[0008] Accordingly, the invention provides a bio-optical sensor comprising a semiconductor
substrate having an image plane formed as an array of pixels, the image plane being
adapted to receive thereon an analyte and a reagent which reacts with the analyte
to produce light; in which the pixels comprise sensing pixels which generate signals
which are a function of light emitted by said reaction and calibration pixels which
are not exposed to said light; and in which the calibration pixels are interleaved
with the sensing pixels.
[0009] Preferred features and other advantages of the invention will be apparent from the
following description and from the claims.
[0010] Embodiments of the invention will now be described, by way of example only, with
reference to the drawings, in which:
Figure 1 is a schematic plan view of the image area of one embodiment of the invention;
Figures 2, 3 and 4 are similar views of further embodiments;
Figure 5 illustrates a general case of the image area; and
Figure 6 is a graph showing the relationship between the size of pixel blocks and
spatial efficiency.
[0011] Figure 1 shows the simplest form of the invention in which the image surface is divided
into sensing pixels 10 and calibration pixels 12 which are interleaved on a 1:1 basis,
in chequerboard fashion. Each of the pixels 10, 12 is an imaging pixel of well-known
type, such as a 3-transistor or 4-transistor pixel based on CMOS technology. The calibration
pixels 12 are shielded from light by a suitable mask, which may for example be printed
on top of the array or may be formed by selective metallisation during fabrication.
Where a metal mask is used, this is preferably as a layer separated from the readout
electronics, to reduce parasitic capacitance.
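The 1:1 chequerboard interleave of Figure 1 can be represented in software by a simple boolean mask marking which pixels are the shielded calibration pixels; the sketch below is only an illustrative assumption of how such a mask might be generated (the function name and the 8x8 array size are arbitrary).

```python
import numpy as np

def chequerboard_calibration_mask(rows: int, cols: int) -> np.ndarray:
    """True where a pixel is a shielded calibration pixel, False for a sensing pixel."""
    y, x = np.indices((rows, cols))
    return (x + y) % 2 == 1  # alternate pixels in both directions, as in Figure 1

mask = chequerboard_calibration_mask(8, 8)  # 50% calibration, 50% sensing
```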
[0012] Alternatives to metallisation for forming the opaque layer include silicided gate oxide,
and the superposition of colour filters, i.e. overlaying red, green and blue filters to
give black.
[0013] It is preferable that the pixels situated at the edge of the sensor are not used,
either for sensing or calibration. These have neighbouring pixels on fewer than four
sides, whereas the other pixels have neighbours on all four sides. Also, practical issues
with the fabrication processing of the sensor cause variations in the size of the
patterned features which will be exacerbated at the edges. These factors change the
analogue performance of the 'border' pixels at the edges, and thus the border pixels
are best ignored.
[0014] In the arrangement of Figure 1 it will almost certainly be necessary in practical
terms to cover the whole of the sensing surface with analyte and reagent, since it
would be difficult to physically contain a liquid system to single pixel areas. This
has the disadvantage that only 50% of the analyte and reagent is available to the
sensing pixels, while the quantities of both are usually limited by the difficulty of obtaining
the sample and the cost of the reagent.
[0015] This problem can be addressed by dividing the surface into sensitive regions and
calibration regions, giving the possibility of applying the analyte and reagent only
to the sensitive regions.
[0016] Figure 2 shows an interleaving scheme using 2x2 blocks of pixels. However, interleaving
in blocks does pose problems. It is reasonable to assume that an edge pixel of a block
will have a response significantly different from that of the interior pixels and should be discarded.
Thus, the Figure 2 array may not be practicable. Figure 3 shows an array interleaved
in blocks of 3x3 in which, if the edge pixels are not used, only 1/9 of the surface
area will be effective. Figure 4 shows 4x4 blocks, in which 1/4 of the area will be
effective if edge pixels are unused.
[0017] Figure 5 shows the general case where the sensor has X (horizontally) x Y (vertically)
pixels, arranged in blocks of MxN pixels. Each block therefore has (M-2) x (N-2) useful
pixels. The graph of Figure 6 shows the percentage of useful pixels for different
block sizes, assuming square blocks with M=N.
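By way of illustration of the general MxN block arrangement of Figure 5, the sketch below builds two masks: one marking which blocks are calibration blocks and one marking the interior, i.e. useful, pixels of each block. The function name, the chequerboard assignment of blocks and the example sizes are assumptions made for the example, not features taken from the figure.

```python
import numpy as np

def block_masks(X: int, Y: int, M: int, N: int):
    """Masks for an X-wide, Y-high array tiled with MxN blocks: which pixels lie in
    calibration blocks, and which pixels are interior to their block (cf. (M-2) x (N-2))."""
    y, x = np.indices((Y, X))
    calibration_block = ((x // M) + (y // N)) % 2 == 1   # alternate blocks are shielded
    interior = (x % M > 0) & (x % M < M - 1) & (y % N > 0) & (y % N < N - 1)
    return calibration_block, interior

cal_block, interior = block_masks(X=28, Y=28, M=7, N=7)
useful_sensing = ~cal_block & interior   # sensing pixels whose signals are actually used
```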
[0018] If we define the spatial efficiency of a block as the proportion of its pixels that are useful, i.e.

spatial efficiency = ((M-2) x (N-2)) / (M x N),

then Figure 6 shows that with block sizes of 6x6 or less the spatial efficiency is
less than 50%, i.e. worse than the simple 1x1 interleave form. For 7x7 blocks, spatial
efficiency is greater than 50%, i.e. there is an improvement over the 1x1 form.
[0019] The graph also illustrates diminishing returns. With blocks of 20x20 pixels, the efficiency
is 80% and increases only slowly from this point. The most useful block
size is likely to lie in the range of 20-30 pixels.
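Taking the definition of spatial efficiency given in paragraph [0018], the following short calculation reproduces the figures read from the graph of Figure 6; it is included only as a numerical check, and the choice of block sizes is arbitrary.

```python
def spatial_efficiency(m: int) -> float:
    """Fraction of useful pixels in a square m x m block when its edge pixels are discarded."""
    return (m - 2) ** 2 / m ** 2

for m in (6, 7, 20, 30):
    print(f"{m}x{m}: {spatial_efficiency(m):.0%}")
# 6x6:   44%  (below the 50% of the simple 1x1 interleave)
# 7x7:   51%  (first square block size to improve on the 1x1 interleave)
# 20x20: 81%  (close to the 80% quoted for Figure 6)
# 30x30: 87%  (only a slow further improvement)
```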
[0020] The foregoing embodiments show the blocks of sensing and calibration pixels distributed
in a common-centred manner, that is in such a way that the "centre of gravity" of
the two types is in a common location. This is the preferred manner, although other
patterns of interleaving may be used.
[0021] Likewise, the preferred embodiments have equal numbers of sensing and calibration
pixels, but the proportion of calibration pixels could be reduced while still benefiting
from the underlying concept.
[0022] A typical method of operating the sensor is as follows.
1. Obtain image with no analyte/reagent present and no light produced: "Idark(x,y)"
2. Separate the image data into two images, pixel data "Pdark(x,y)" and calibration
data "Cdark(x,y)"
3. Add the analyte/reagent and obtain an image with light: "Ilight(x,y)"
4. Separate this into two images, pixel data "Plight(x,y)" and calibration data "Clight(x,y)"
5. The uncompensated image is then calculated as Plight(x,y) - Pdark(x,y) (on a pixel-by-pixel
basis)
6. The compensation signal is calculated from the calibration pixels as fnCal(Clight(x,y),Cdark(x,y))
7. Compute the compensated image from the uncompensated image and the compensation signal.
[0023] In the simplest case, fnCal could be linear, i.e. fnCal(x,y) = Cdark(x,y)/Clight(x,y).
This is suitable where the error source changes linearly.
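A short Python sketch of steps 1 to 7, using the linear calibration function just described and the chequerboard mask of the earlier example, is given below. The way the compensation signal is combined with the pixel data in step 7 is not specified above, so the rescaling of the dark estimate used here is an assumption made only to complete the illustration.

```python
import numpy as np

def split(frame, cal_mask):
    """Steps 2 and 4: separate a raw frame into sensing-pixel data P and calibration data C."""
    P = np.where(~cal_mask, frame.astype(float), np.nan)   # sensing pixels only
    C = np.where(cal_mask, frame.astype(float), np.nan)    # shielded calibration pixels only
    return P, C

def compensate(I_dark, I_light, cal_mask, fn_cal):
    P_dark, C_dark = split(I_dark, cal_mask)      # steps 1 and 2
    P_light, C_light = split(I_light, cal_mask)   # steps 3 and 4
    uncompensated = P_light - P_dark              # step 5, pixel by pixel
    compensation = fn_cal(C_light, C_dark)        # step 6, from the calibration pixels
    # Step 7 (assumed combination): rescale the dark estimate by the change seen on the
    # calibration pixels before subtracting, so that drift in the dark current is tracked.
    gain = 1.0 / np.nanmean(compensation)
    compensated = P_light - gain * P_dark
    return uncompensated, compensated

# Simplest (linear) calibration function, as in paragraph [0023].
linear_fn_cal = lambda C_light, C_dark: C_dark / C_light
```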
[0024] However, the main use for this technique is to correct for temperature changes, since
the dark current rises exponentially with temperature. The calibration function can represent
this, e.g. fnCal(x,y) = log(Cdark(x,y)/Clight(x,y)).
[0025] Depending on the design of the sense node, other errors may be significant and require
a change to the calibration function. This can be computed arithmetically or determined
empirically and incorporated in a look-up table.
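Where the calibration function is determined empirically, paragraph [0025] suggests incorporating it in a look-up table. A minimal sketch of such a table-driven calibration function follows; the table values are invented placeholders, not measured data, and the function returns a correction factor directly.

```python
import numpy as np

# Hypothetical, empirically-determined table: ratio of calibration-pixel signals
# (Clight/Cdark) against the correction factor to apply. Placeholder values only.
ratio_points      = np.array([1.0, 1.5, 2.0, 3.0, 5.0])
correction_points = np.array([1.0, 1.4, 1.9, 2.7, 4.2])

def fn_cal_lut(C_light: np.ndarray, C_dark: np.ndarray) -> np.ndarray:
    """Look-up-table calibration function: interpolate the correction for each pixel."""
    ratio = C_light / C_dark
    return np.interp(ratio, ratio_points, correction_points)
```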
1. A bio-optical sensor comprising a semiconductor substrate having an image plane formed
as an array of pixels, the image plane being adapted to receive thereon an analyte
and a reagent which reacts with the analyte to produce light; in which the pixels
comprise sensing pixels which generate signals which are a function of light emitted
by said reaction and calibration pixels which are not exposed to said light; and in
which the calibration pixels are interleaved with the sensing pixels.
2. A sensor according to claim 1, in which there are equal numbers of calibration pixels
and sensing pixels.
3. A sensor according to claim 2, in which the pixels are interleaved alternately.
4. A sensor according to claim 1 or claim 2, in which the pixels are arranged in blocks
of calibration pixels and blocks of sensing pixels, the blocks being interleaved.
5. A sensor according to claim 4, in which the signals from pixels at the edge of a block
are not used.
6. A sensor according to claim 4 or claim 5, in which each block comprises between 20
and 30 pixels.
7. A sensor according to any preceding claim, in which the signals from pixels at the
edge of the array are not used.
8. A sensor according to any preceding claim, in which the calibration pixels are overlaid
with an opaque substance.
9. A sensor according to claim 8, in which the opaque substance is formed by a metallised
layer.
10. A sensor according to any of claims 1 to 7, in which the surface of the image plane
is divided such that the analyte and the reagent contact the sensing pixels but do
not contact the calibration pixels.