(19)
(11)EP 4 238 484 A1

(12)EUROPEAN PATENT APPLICATION
published in accordance with Art. 153(4) EPC

(43)Date of publication:
06.09.2023 Bulletin 2023/36

(21)Application number: 21885560.9

(22)Date of filing:  02.02.2021
(51)International Patent Classification (IPC): 
A61B 3/13(2006.01)
(52)Cooperative Patent Classification (CPC):
A61B 3/125; A61B 3/13; A61B 3/15; A61F 9/007; A61B 3/10
(86)International application number:
PCT/JP2021/003639
(87)International publication number:
WO 2022/091435 (05.05.2022 Gazette  2022/18)
(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(30)Priority: 27.10.2020 US 202063106087 P

(71)Applicant: TOPCON CORPORATION
Tokyo 174-8580 (JP)

(72)Inventors:
  • YAMADA Kazuhiro
    Tokyo 174-8580 (JP)
  • OOMORI Kazuhiro
    Tokyo 174-8580 (JP)
  • FUKUMA Yasufumi
    Tokyo 174-8580 (JP)

(74)Representative: Gagel, Roland 
Patentanwalt Dr. Roland Gagel
Landsberger Strasse 480a
81241 München (DE)

  


(54)OPHTHALMOLOGICAL OBSERVATION DEVICE, METHOD FOR CONTROLLING SAME, PROGRAM, AND STORAGE MEDIUM


(57) An ophthalmological observation device (1) according to an exemplary embodiment of the present invention comprises a moving image generation unit (30, 40), an analysis unit (211), a target location determination unit (212), and a display control unit (200). The moving image generation unit images a subject eye to generate a moving image. The analysis unit analyzes a still image, which is included in the moving image generated by the moving image generation unit, so as to detect an image of a given site in the subject eye. The target location determination unit determines a target location for a predetermined treatment at least on the basis of the image of the given site which has been detected by the analysis unit. The display control unit displays, on a display device (3), the moving image which has been generated by the moving image generation unit and target location information which indicates the target location determined by the target location determination unit.




Description

TECHNICAL FIELD


(CROSS-REFERENCE TO RELATED APPLICATIONS)



[0001] The present application claims priority to U.S. Provisional Patent Application Serial No. 63/106,087, filed October 27, 2020, entitled "APPARATUS AND METHOD FOR OPHTHALMIC OBSERVATION", the entirety of which is incorporated herein by reference.

[0002] The present disclosure relates generally to an ophthalmic observation apparatus, a method of controlling the same, a program, and a recording medium.

BACKGROUND OF THE INVENTION



[0003] An ophthalmic observation apparatus is an apparatus for observing an eye of a patient (which will be referred to as a subject's eye hereinafter). Ophthalmic observation is conducted to grasp the condition of the subject's eye in various situations such as examination, surgery, and treatment.

[0004] Conventional ophthalmic observation apparatuses are configured to provide a user with a magnified image formed by an objective lens, a variable magnification optical system, etc. via an eyepiece. In recent years, some ophthalmic observation apparatuses are configured to photograph, with an image sensor, a magnified image formed by an objective lens, a variable magnification optical system, etc., and to display the photographed image thus obtained (such an ophthalmic observation apparatus will be referred to as a digital ophthalmic observation apparatus). Examples of such digital ophthalmic observation apparatuses include surgical microscopes, slit lamp microscopes, and fundus cameras (retinal cameras). In addition, various kinds of ophthalmic examination apparatuses such as refractometers, keratometers, tonometers, specular microscopes, wavefront analyzers, and microperimeters are also provided with the function of the digital ophthalmic observation apparatus.

[0005] Furthermore, some ophthalmic observation apparatuses of recent years use optical scanning (scanning-type ophthalmic observation apparatus). Examples of such ophthalmic observation apparatuses include scanning laser ophthalmoscopes (SLOs), and optical coherence tomography (OCT) apparatuses.

[0006] Generally, an ophthalmic observation apparatus is configured to provide a moving image of a subject's eye to a user (e.g., a health professional (health care practitioner) such as a doctor). A typical digital ophthalmic observation apparatus is configured to perform photographing of a moving image using infrared light and/or visible light as illumination light, and real-time display of the moving image obtained by the moving image photography. On the other hand, a typical scanning-type ophthalmic observation apparatus is configured to perform data collection (data acquisition) by repetitive optical scanning, real-time image reconstruction based on datasets sequentially collected, and real-time moving image display of images sequentially reconstructed. The real-time moving image provided in these ways is referred to as an observation image or a live image.

[0007] An ophthalmic observation apparatus capable of providing a real-time moving image is also used in surgery. There are various types of ophthalmic surgery and many ophthalmic surgeries are performed by inserting instruments into the eye. Instrument insertion requires an incision of an ocular tissue. For example, cataract surgery to replace the crystalline lens with an intraocular lens (IOL) typically involves corneal incision (keratotomy, sclerocorneal incision), crystalline lens anterior capsule incision (incision of the anterior capsule of the crystalline lens, referred to as capsulotomy, anterior capsulectomy, or the like) (e.g., Continuous Curvilinear Capsulorhexis (CCC)), and so forth. Further, in refractive surgery to implant an intraocular contact lens (ICL; also known as a phakic IOL) in the anterior chamber, corneal incision is usually made. In addition, minimally invasive glaucoma surgery (MIGS) involves incision of the trabecular meshwork and insertion of a stent into the trabecular meshwork. In vitreoretinal surgery (vitrectomy), in which a light guide (illumination) and various kinds of instruments are used, conjunctival incision and sclerotomy (scleral incision) are generally made.

[0008] Such incisions in ocular tissue need to be made at appropriate locations. However, it is difficult to determine an appropriate incision target position since there are few landmarks in the eye, making a mark on the eye is difficult, and the eye may move during surgery. The same applies to treatments or procedures other than incisions.

PRIOR ART DOCUMENTS


PATENT DOCUMENTS



[0009] PATENT DOCUMENT 1: Japanese Unexamined Patent Application Publication No. 2019-162336

BRIEF SUMMARY OF THE INVENTION


PROBLEM TO BE SOLVED BY THE INVENTION



[0010] One object of the present disclosure is to provide a novel method or technique for facilitating ophthalmic observation.

MEANS FOR SOLVING THE PROBLEM



[0011] Some aspect examples are an ophthalmic observation apparatus for observing a subject's eye that includes: a moving image generating unit configured to generate a moving image by photographing the subject's eye; an analyzing processor configured to analyze a still image included in the moving image to detect an image of a predetermined site of the subject's eye; a target position determining processor configured to determine a target position for a predetermined treatment based at least on the image of the predetermined site detected by the analyzing processor; and a display controller configured to display, on a display device, the moving image and target position information that represents the target position.
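The processing flow of this configuration (analyzing a still image included in the moving image, determining a target position from the detected site image, and passing the result to the display controller) can be sketched as follows. This is an illustrative sketch only; the toy detector, the fixed-offset rule, and all names (detect_site, determine_target, process_frame) are assumptions made for exposition, not part of the disclosed apparatus.

```python
def detect_site(frame):
    """Toy analyzer: treats nonzero pixels of a still image as the image of
    the predetermined site and returns their centroid (a real analyzer
    would use segmentation, template matching, or the like)."""
    pts = [(x, y) for y, row in enumerate(frame) for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def determine_target(site, dx, dy):
    """Determine the target position as a fixed offset from the site image."""
    return (site[0] + dx, site[1] + dy)

def process_frame(frame, dx=5.0, dy=0.0):
    """One iteration of the observation loop: analyze a still image included
    in the moving image, determine the target position, and return the
    target position information for the display controller to overlay."""
    site = detect_site(frame)
    if site is None:
        return None
    return determine_target(site, dx, dy)

# Example: a 3x3 frame whose "site" occupies pixels (1, 1) and (2, 1)
frame = [[0, 0, 0], [0, 1, 1], [0, 0, 0]]
print(process_frame(frame))  # (6.5, 1.0)
```

In an actual apparatus this loop would run on each frame of the live image, so the overlaid target position information follows the eye as it moves.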

[0012] In the ophthalmic observation apparatus of some aspect examples, the target position determining processor is configured to determine the target position based at least on the image of the predetermined site detected by the analyzing processor and measurement data of the subject's eye acquired in advance.

[0013] In some aspect examples, the ophthalmic observation apparatus further includes a measuring unit configured to acquire measurement data of the subject's eye, and the target position determining processor is configured to determine the target position based at least on the image of the predetermined site detected by the analyzing processor and the measurement data acquired by the measuring unit.

[0014]  In the ophthalmic observation apparatus of some aspect examples, the target position determining processor includes an assessing processor configured to assess influence of the predetermined treatment on the subject's eye based on the measurement data acquired by the measuring unit, and the target position determining processor is configured to determine the target position based at least on the image of the predetermined site detected by the analyzing processor and a result obtained by the assessing processor.

[0015] In the ophthalmic observation apparatus of some aspect examples, the target position determining processor is configured to determine, as the target position, a position away from the image of the predetermined site detected by the analyzing processor by a predetermined distance.
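The rule of this paragraph (a target position located a predetermined distance away from the detected site image) admits a simple geometric sketch. The circular site model, the angle parameter, and the function name target_at_distance are illustrative assumptions, not part of the disclosure.

```python
import math

def target_at_distance(cx, cy, radius, distance, angle_deg):
    """Return a target position located a predetermined distance outward
    from the boundary of a circular site image (e.g., a circle fitted to
    a detected site), at a given angular position on that circle."""
    theta = math.radians(angle_deg)
    r = radius + distance
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))

# Site image modeled as a circle of radius 50 centered at (320, 240);
# the target lies 10 units outside the boundary at the 90-degree position.
x, y = target_at_distance(320.0, 240.0, 50.0, 10.0, 90.0)
```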

[0016] In the ophthalmic observation apparatus of some aspect examples, when a first treatment and a second treatment are applied to the subject's eye, the analyzing processor performs first detection and second detection, the first detection including analysis of a first still image generated by the moving image generating unit prior to the first treatment to detect an image of a first site of the subject's eye, and the second detection including analysis of a second still image generated by the moving image generating unit prior to the second treatment to detect an image of a second site of the subject's eye, the target position determining processor performs first determination and second determination, the first determination including determination of a first target position that is a target position for the first treatment based at least on the image of the first site, and the second determination including determination of a second target position that is a target position for the second treatment based at least on the image of the second site, and the display controller performs a first display control and a second display control, the first display control including a control for displaying the moving image and first target position information that represents the first target position for the first treatment, and the second display control including a control for displaying the moving image and second target position information that represents the second target position for the second treatment.
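The switching between the first and second detections, determinations, and display controls described above can be sketched as a small state machine. The treatment names, the offsets, and the class name TargetingSession are hypothetical placeholders, not part of the disclosure.

```python
TREATMENT_PLAN = {
    # hypothetical mapping: treatment -> (site to detect, x-offset of target)
    "first_treatment": ("site_A", 2.0),
    "second_treatment": ("site_B", -3.0),
}

class TargetingSession:
    """Tracks the current treatment and switches which site image and which
    target determination rule are in effect, mirroring the first/second
    detection, determination, and display control described above."""

    def __init__(self, plan):
        self.plan = plan
        self.current = None

    def on_treatment_change(self, treatment):
        # Plays the role of the treatment change detecting unit of [0017].
        self.current = treatment

    def target_info(self, detected_sites):
        # detected_sites: site name -> (x, y) position of its detected image
        site_name, dx = self.plan[self.current]
        x, y = detected_sites[site_name]
        return {"treatment": self.current, "target": (x + dx, y)}

session = TargetingSession(TREATMENT_PLAN)
session.on_treatment_change("first_treatment")
sites = {"site_A": (100.0, 100.0), "site_B": (50.0, 60.0)}
print(session.target_info(sites))  # {'treatment': 'first_treatment', 'target': (102.0, 100.0)}
```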

[0017] In some aspect examples, the ophthalmic observation apparatus further includes a treatment change detecting unit configured to detect a change from the first treatment to the second treatment.

[0018]  Some aspect examples are a method of controlling an ophthalmic observation apparatus that includes a processor and generates a moving image of a subject's eye, the method including: causing the processor to analyze a still image included in the moving image to detect an image of a predetermined site of the subject's eye; causing the processor to determine a target position for a predetermined treatment based at least on the image of the predetermined site detected; and causing the processor to display, on a display device, the moving image and target position information that represents the target position.

[0019] Some aspect examples are a program configured to cause a computer to execute the method of some aspect examples.

[0020] Some aspect examples are a computer-readable non-transitory recording medium storing the program of some aspect examples.

EFFECT OF THE INVENTION



[0021] These aspect examples facilitate ophthalmic surgery.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING



[0022] 

FIG. 1 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus (ophthalmic surgical microscope) according to an embodiment example.

FIG. 2 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus according to an embodiment example.

FIG. 3 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus according to an embodiment example.

FIG. 4 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus according to an embodiment example.

FIG. 5 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus according to an embodiment example.

FIG. 6 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus according to an embodiment example.

FIG. 7 is a diagram illustrating an example of a configuration of an ophthalmic observation apparatus according to an embodiment example.

FIG. 8A is a flowchart illustrating an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 8B is a flowchart illustrating an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 9A is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 9B is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 9C is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 9D is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 9E is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 10A is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 10B is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 10C is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.

FIG. 11 is a diagram for describing an example of processing performed by an ophthalmic observation apparatus according to an embodiment example.


DETAILED DESCRIPTION OF THE INVENTION



[0023] Some aspect examples of an ophthalmic observation apparatus, a method of controlling the same, a program, and a recording medium according to some embodiments will be described in detail with reference to the drawings. It should be noted that any of the matters and items described in the documents cited in the present disclosure and any known techniques and technologies may be combined with any of the aspect examples.

[0024] The ophthalmic observation apparatus according to some aspect examples is used in medical practice (healthcare practice) such as surgery, examination, and treatment, in order to grasp (understand, recognize, find) the state of the subject's eye. The ophthalmic observation apparatus of the aspect examples described herein is mainly a surgical microscope system. However, ophthalmic observation apparatuses of embodiments are not limited to surgical microscope systems. For example, the ophthalmic observation apparatus of some aspect examples may be any of a slit lamp microscope, a fundus camera, a refractometer, a keratometer, a tonometer, a specular microscope, a wavefront analyzer, a microperimeter, an SLO, and an OCT apparatus. Also, the ophthalmic observation apparatus of some aspect examples may be a system that includes any one or more of these apparatus examples. In a wider sense, the ophthalmic observation apparatus of some aspect examples may be any type of ophthalmic apparatus having an observation function.

[0025] A target ocular site for observation (ocular site to be observed, ocular site subject to observation) by using the ophthalmic observation apparatus may be any site of the subject's eye, and may be any site of the anterior eye segment and/or any site of the posterior eye segment. Examples of the observation target sites of the anterior eye segment include cornea, iris, anterior chamber, corner angle, crystalline lens, ciliary body, and zonule of Zinn. Examples of the observation target sites of the posterior eye segment include retina, choroid, sclera, and vitreous body. The observation target site is not limited to tissues of an eyeball, and may be any site to be observed in ophthalmic medical practice (and/or medical practice in other medical fields) such as eyelid, meibomian gland, and orbit (eye socket, eye pit).

[0026] The ophthalmic observation apparatus is used, for example, in surgery in which an artificial object is inserted into the eye. The artificial object may be a freely selected instrument (device) that is implanted (placed) in the eye. Examples of such an artificial object include an intraocular lens, an intraocular contact lens, and an MIGS device (stent). Note that the artificial object may also be a freely selected medical instrument such as a light guide, a surgical instrument, an examination instrument, or a treatment instrument.

[0027] At least one or more of the functions of the elements described in the present disclosure are implemented by using a circuit configuration (or circuitry) or a processing circuit configuration (or processing circuitry). The circuitry or the processing circuitry includes any of the following, all of which are configured and/or programmed to execute at least one or more functions disclosed herein: a general purpose processor, a dedicated processor, an integrated circuit, a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a programmable logic device (e.g., a simple programmable logic device (SPLD), a complex programmable logic device (CPLD), or a field programmable gate array (FPGA)), a conventional circuit configuration or circuitry, and any combination of these. A processor is considered to be processing circuitry or circuitry that includes transistors and/or other circuitry. In the present disclosure, circuitry, a unit, a means, or a term similar to these is hardware that executes at least one or more functions disclosed herein, or hardware that is programmed to execute at least one or more functions disclosed herein. The hardware may be the hardware disclosed herein, or alternatively, known hardware that is programmed and/or configured to execute at least one or more functions described herein. Where the hardware is a processor, which may be regarded as a certain type of circuitry, then circuitry, a unit, a means, or a term similar to these is a combination of hardware and software, the software being used to configure the hardware and/or the processor.

< Ophthalmic observation apparatus >



[0028] FIG. 1 shows the configuration of the ophthalmic observation apparatus of some aspect examples.

[0029] The ophthalmic observation apparatus 1 (surgical microscope system, operation microscope system) according to the present embodiment includes the operation device 2, the display device 3, and the surgical microscope (operation microscope) 10. In some aspects, the surgical microscope 10 may include at least one of the operation device 2 and the display device 3. In some aspects, the display device 3 may not be included in the ophthalmic observation apparatus 1. In other words, the display device 3 may be a peripheral device of the ophthalmic observation apparatus 1.

< Operation device 2 >



[0030] The operation device 2 includes one or more operation devices and/or input devices. For example, the operation device 2 may include any of a button, a switch, a mouse, a keyboard, a trackball, an operation panel, a dial, and so forth. Typically, the operation device 2 includes a foot switch, like standard (general, normal, usual) ophthalmic surgical microscopes. Further, the operation device 2 may also be configured in such a manner that the user performs operations using voice recognition, line-of-sight (gaze) input, or similar input technologies.

< Display device 3 >



[0031] The display device 3 displays an image of the subject's eye acquired by the surgical microscope 10. The display device 3 includes a display device such as a flat panel display, and may include any of various kinds of display devices such as a touch panel. The display device 3 of some typical aspects includes a display device with a large screen. The display device 3 includes one or more display devices. In the case where the display device 3 includes two or more display devices, for example, one may be a display device with a relatively large screen and another may be a display device with a relatively small screen. Also, a configuration may be employed in which a plurality of display regions is provided in one display device to display a plurality of pieces of information.

[0032] The operation device 2 and the display device 3 do not have to be separate devices. For example, a device having both the operation function and the display function, such as a touch panel, may be used as the display device 3. In such a case, the operation device 2 may include a computer program in addition to the touch panel. The content of an operation made with the operation device 2 is sent to a processor (not shown in the drawings) as an electric signal. Further, a graphical user interface (GUI) displayed on the display device 3 and the operation device 2 may be used to conduct operations (instructions) and to input information. In some aspects, the functions of the operation device 2 and the display device 3 may be implemented with a touch screen.

< Surgical microscope 10 >



[0033] The surgical microscope 10 is used for observation of the eye of a patient (subject's eye) in the supine position. The surgical microscope 10 performs photographing of the subject's eye to generate digital image data. In particular, the surgical microscope 10 generates a moving image of the subject's eye. The moving image (video, movie) generated by the surgical microscope 10 is transmitted to the display device 3 through a wired and/or wireless signal path and displayed on the display device 3. The user (e.g., surgeon) can carry out surgery while observing the subject's eye through the displayed image. In addition to such observation through the displayed image, the surgical microscope 10 of some aspects may also be capable of providing observation through an eyepiece as in conventional technology.

[0034] In some aspects, the surgical microscope 10 includes a communication device for transmitting and receiving electrical signals to and from the operation device 2. The operation device 2 receives an operation (instruction) performed by the user and generates an electric signal (operation signal) corresponding to the operation. The operation signal is transmitted to the surgical microscope 10 through a wired and/or wireless signal path. The surgical microscope 10 executes processing corresponding to the operation signal received.

< Optical system of surgical microscope 10 >



[0035] An example of the configuration of the optical system of the surgical microscope 10 will be described. Below, directions are defined as follows, for convenience of description: the z direction is defined to be the optical axis direction (direction along the optical axis) of the objective lens (the z direction is, for example, the vertical direction, the up and down direction during surgery); the x direction is defined to be a predetermined direction perpendicular to the z direction (the x direction is, for example, the horizontal direction during surgery, and the left and right direction for the surgeon and the patient during surgery); and the y direction is defined to be the direction perpendicular to both the z and x directions (the y direction is, for example, the horizontal direction during surgery, the front and back direction for the surgeon during surgery, and the body axis direction (direction along the body axis) for the patient during surgery).

[0036] In addition, the case where the observation optical system includes a pair of left and right optical systems (optical systems that allow the user to perform binocular observation) will be mainly described below. However, an observation optical system of some other aspects may have an optical system for monocular observation, and it will be understood by those skilled in the art that the configuration described below may be incorporated into the aspects for monocular observation.

[0037] FIG. 2 shows an example of the configuration of the optical system of the surgical microscope 10. FIG. 2 illustrates a schematic top view of the optical system (viewed from above) and a schematic side view of the optical system (viewed from the side) in association with each other. In order to simplify the illustration, the illumination optical system 30 arranged above the objective lens 20 is omitted from the top view.

[0038] The surgical microscope 10 includes the objective lens 20, the dichroic mirror DM1, the illumination optical system 30, and the observation optical system 40. The observation optical system 40 includes the zoom expander 50, and the imaging camera 60. In some aspects, the illumination optical system 30 or the observation optical system 40 includes the dichroic mirror DM1.

[0039]  The objective lens 20 is arranged to face the subject's eye. The objective lens 20 is arranged such that its optical axis is oriented along the z direction. The objective lens 20 may include two or more lenses.

[0040] The dichroic mirror DM1 couples the optical path of the illumination optical system 30 and the optical path of the observation optical system 40 with each other. The dichroic mirror DM1 is arranged between the illumination optical system 30 and the objective lens 20. The dichroic mirror DM1 transmits illumination light from the illumination optical system 30 and directs the illumination light to the subject's eye through the objective lens 20. Also, the dichroic mirror DM1 reflects return light from the subject's eye incident through the objective lens 20 and directs the return light to the imaging camera 60 of the observation optical system 40.

[0041] The dichroic mirror DM1 coaxially couples the optical path of the illumination optical system 30 and the optical path of the observation optical system 40 with each other. In other words, the optical axis of the illumination optical system 30 and the optical axis of the observation optical system 40 intersect at the dichroic mirror DM1. In the case where the illumination optical system 30 includes an illumination optical system for the left eye (the first illumination optical system 31L) and an illumination optical system for the right eye (the first illumination optical system 31R), and where the observation optical system 40 includes the observation optical system for the left eye 40L and the observation optical system for the right eye 40R, the dichroic mirror DM1 coaxially couples the optical path of the first illumination optical system 31L and the optical path of the observation optical system for the left eye 40L with each other, and coaxially couples the optical path of the first illumination optical system 31R and the optical path of the observation optical system for the right eye 40R with each other.

[0042] The illumination optical system 30 is an optical system for illuminating the subject's eye through the objective lens 20. The illumination optical system 30 may be configured to selectively illuminate the subject's eye with two or more pieces of illumination light having different color temperatures. The illumination optical system 30 projects illumination light having a designated color temperature onto the subject's eye under the control of a controller (the controller 200 described later).

[0043]  The illumination optical system 30 includes the first illumination optical systems 31L and 31R and the second illumination optical system 32.

[0044] Each of the optical axis OL of the first illumination optical system 31L and the optical axis OR of the first illumination optical system 31R is arranged substantially coaxially with the optical axis of the objective lens 20. Such arrangements enable a coaxial illumination mode and therefore make it possible to obtain a red reflex image (transillumination image) formed by utilizing diffuse reflection from the eye fundus. The present aspect allows the red reflex image of the subject's eye to be observed with both eyes.

[0045] The second illumination optical system 32 is arranged in such a manner that its optical axis OS is eccentric (deviated, shifted) from the optical axis of the objective lens 20. The first illumination optical systems 31L and 31R and the second illumination optical system 32 are arranged such that the deviation of the optical axis OS with respect to the optical axis of the objective lens 20 is larger than the deviations of the optical axes OL and OR with respect to the optical axis of the objective lens 20. Such arrangements enable an illumination mode referred to as "angled illumination (oblique illumination)" and therefore enable binocular observation of the subject's eye while preventing ghosting caused by corneal reflection or the like. In addition, the arrangements enable detailed observation of unevenness and irregularities of sites and tissues of the subject's eye.

[0046] The first illumination optical system 31L includes the light source 31 LA and the condenser lens 31LB. The light source 31LA outputs illumination light having a wavelength in the visible range (visible region) corresponding to color temperature of 3000 K (kelvins), for example. The illumination light emitted from the light source 31LA passes through the condenser lens 31LB, passes through the dichroic mirror DM1, passes through the objective lens 20, and then is incident on the subject's eye.

[0047] The first illumination optical system 31R includes the light source 31RA and the condenser lens 31RB. The light source 31RA also outputs illumination light having a wavelength in the visible range corresponding to color temperature of 3000 K, for example. The illumination light emitted from the light source 31RA passes through the condenser lens 31RB, passes through the dichroic mirror DM1, passes through the objective lens 20, and then is incident on the subject's eye.

[0048]  The second illumination optical system 32 includes the light source 32A and the condenser lens 32B. The light source 32A outputs illumination light having a wavelength in the visible range corresponding to a color temperature within the range of 4000 K to 6000 K, for example. The illumination light emitted from the light source 32A passes through the condenser lens 32B, passes through the objective lens 20 without passing through the dichroic mirror DM1, and then is incident on the subject's eye.

[0049] In the present aspect example, the color temperature of the illumination light from the first illumination optical systems 31L and 31R is lower than the color temperature of the illumination light from the second illumination optical system 32. Such a configuration makes it possible to observe the subject's eye in warm colors using the first illumination optical systems 31L and 31R, and therefore enables detailed observation of the structure and morphology of the subject's eye.

[0050] In some aspects, each of the optical axes OL and OR is movable relative to the optical axis of the objective lens 20. The direction of the relative movement is a direction that intersects the optical axis of the objective lens 20, and the relative movement is represented by a displacement vector in which at least one of the x component and the y component is not zero. In some aspects, the optical axes OL and OR may be mutually independently movable. On the other hand, in some aspects, the optical axes OL and OR may be integrally movable. For example, the surgical microscope 10 includes a movement mechanism (31d) configured to move the first illumination optical systems 31L and 31R mutually independently or integrally, and therefore the movement mechanism moves the first illumination optical systems 31L and 31R mutually independently or integrally in a direction intersecting the optical axis of the objective lens 20. Such a configuration makes it possible to conduct adjustment of the appearance condition (appearance state) of the subject's eye. In some aspects, the movement mechanism operates under the control of a controller (the controller 200 described later).

[0051] In some aspects, the optical axis OS is movable relative to the optical axis of the objective lens 20. The direction of the relative movement is a direction that intersects the optical axis of the objective lens 20, and the relative movement is represented by a displacement vector in which at least one of the x component and the y component is not zero. For example, the surgical microscope 10 includes a movement mechanism (32d) configured to move the second illumination optical system 32, and therefore the movement mechanism moves the second illumination optical system 32 in a direction that intersects the optical axis of the objective lens 20. With such a configuration, it becomes possible to conduct adjustment of the appearance condition (appearance state) of unevenness and irregularities of sites and tissues of the subject's eye. In some aspects, the movement mechanism operates under the control of a controller (the controller 200 described later).

[0052] As described above, the present aspect is configured such that the illumination optical system 30 is arranged at the position directly above the objective lens 20 (the position in the transmission direction of the dichroic mirror DM1) and the observation optical system 40 is arranged at the position in the reflection direction of the dichroic mirror DM1. For example, the observation optical system 40 may be arranged in such a manner that the angle formed by the optical axis of the observation optical system 40 and the plane perpendicular to the optical axis of the objective lens 20 (the xy plane) belongs to the range between -20 degrees and +20 degrees.

[0053] According to the configuration of the present aspect, the observation optical system 40, which typically has a longer optical path length than the illumination optical system 30, is arranged substantially parallel to the xy plane. Hence, unlike conventional surgical microscopes, whose observation optical system is oriented along the vertical direction in front of the surgeon's eyes, the observation optical system 40 of the present aspect does not interfere with the surgeon's field of view. Therefore, the surgeon can easily see the screen of the display device 3 arranged in front of the surgeon. In other words, the visibility of displayed information (images and videos of the subject's eye, and other various kinds of reference information) during surgery etc. is improved. In addition, since the housing is not placed in front of the surgeon's eyes, it does not give a sense of oppression to the surgeon, thereby reducing the burden on the surgeon.

[0054] The observation optical system 40 is an optical system for observation of an image formed based on return light of the illumination light incident from the subject's eye through the objective lens 20. In the present aspect, the observation optical system 40 guides the image to an image sensor of the imaging camera 60.

[0055] As described above, the observation optical system 40 includes the observation optical system for left eye 40L and the observation optical system for right eye 40R. The configuration of the observation optical system for left eye 40L and the configuration of the observation optical system for right eye 40R are the same as or similar to one another. In some aspects, the observation optical system for left eye 40L and the observation optical system for right eye 40R may be configured in such a manner that their optical arrangements can be changed independently of each other.

[0056] The zoom expander 50 is also referred to as a beam expander, a variable beam expander, or the like. The zoom expander 50 includes the zoom expander for left eye 50L and the zoom expander for right eye 50R. The configuration of the zoom expander for left eye 50L and the configuration of the zoom expander for right eye 50R are the same as or similar to each other. In some aspects, the zoom expander for left eye 50L and the zoom expander for right eye 50R may be configured in such a manner that their optical arrangements can be changed independently of each other.

[0057] The zoom expander for left eye 50L includes the plurality of zoom lenses 51L, 52L, and 53L. At least one of the zoom lenses 51L, 52L, and 53L is movable in the direction along the optical axis with a variable magnification mechanism (not shown in the drawings).

[0058] Similarly, the zoom expander for right eye 50R includes the plurality of zoom lenses 51R, 52R, and 53R, and at least one of the zoom lenses 51R, 52R, and 53R is movable in the direction along the optical axis with a variable magnification mechanism (not shown in the drawings).

[0059] The variable magnification mechanism(s) may be configured to move a zoom lens of the zoom expander for left eye 50L and a zoom lens of the zoom expander for right eye 50R mutually independently or integrally in the directions along the optical axes. As a result of this, the magnification ratio for photographing the subject's eye is changed. In some aspects, the variable magnification mechanism(s) operates under the control of a controller (the controller 200 described later).

[0060] The imaging camera 60 is a device that photographs an image formed by the observation optical system 40 and generates digital image data. The imaging camera 60 is typically a digital camera (digital video camera). The imaging camera 60 includes the imaging camera for left eye 60L and the imaging camera for right eye 60R. The configuration of the imaging camera for left eye 60L and the configuration of the imaging camera for right eye 60R are the same as or similar to one another. In some aspects, the imaging camera for left eye 60L and the imaging camera for right eye 60R may be configured such that their optical arrangements can be changed independently of each other.

[0061] The imaging camera for left eye 60L includes the imaging lens 61L and the image sensor 62L. The imaging lens 61L forms an image based on the return light that has passed through the zoom expander for left eye 50L, on the imaging surface (light receiving surface) of the image sensor 62L. The image sensor 62L is an area sensor, and may typically be a charge-coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor. The image sensor 62L operates under the control of a controller (the controller 200 described later).

[0062] The imaging camera for right eye 60R includes the imaging lens 61R and the image sensor 62R. The imaging lens 61R forms an image based on the return light that has passed through the zoom expander for right eye 50R, on the imaging surface (light receiving surface) of the image sensor 62R. The image sensor 62R is an area sensor, and may typically be a CCD image sensor or a CMOS image sensor. The image sensor 62R operates under the control of a controller (the controller 200 described later).

< Processing system >



[0063] The processing system of the ophthalmic observation apparatus 1 will be described. Examples of a basic configuration of the processing system are shown in FIG. 3 and FIG. 4, and some examples of specific configurations are shown in FIG. 5, FIG. 6, and FIG. 7. Any two or more of the various kinds of configuration examples described below may be combined at least in part. Note that the configuration of the processing system is not limited to the examples described below.

[0064] The controller 200 executes a control of each part of the ophthalmic observation apparatus 1. The controller 200 includes the main controller 201 and the memory 202. The main controller 201 includes a processor and executes a control of each part of the ophthalmic observation apparatus 1. For example, the processor may load and run a program stored in the memory 202 or another storage device, thereby implementing a function according to the present aspect. In addition, the processor may use (e.g., refer to, process, or calculate) data and/or information stored in the memory 202 or another storage device in order to implement a function according to the present aspect.

[0065] The main controller 201 may control the light sources 31LA, 31RA, and 32A of the illumination optical system 30, the image sensors 62L and 62R of the observation optical system 40, the movement mechanisms 31d and 32d, the variable magnification mechanisms 50Ld and 50Rd, the operation device 2, the display device 3, and other component parts.

[0066] Controls of the light source 31LA include turning on and off the light source, adjusting the light amount, adjusting the diaphragm (aperture), and so forth. Controls of the light source 31RA include turning on and off the light source, adjusting the light amount, adjusting the diaphragm (aperture), and so forth. The main controller 201 may perform mutually exclusive controls of the light sources 31LA and 31RA. Controls of the light source 32A include turning on and off the light source, adjusting the light amount, adjusting the diaphragm (aperture), and so forth.

[0067] In the case where the illumination optical system 30 includes a light source whose color temperature can be varied, the main controller 201 may change the color temperature of emitted illumination light by controlling such a light source.

[0068] Controls of the image sensor 62L include exposure adjustment, gain adjustment, photographing rate adjustment, and so forth. Controls of the image sensor 62R include exposure adjustment, gain adjustment, photographing rate adjustment, and so forth. Further, the main controller 201 may control the image sensors 62L and 62R in such a manner that the photographing timings of the image sensors 62L and 62R match each other, or in such a manner that the difference between the photographing timings of the image sensors 62L and 62R lies within a predetermined time. In addition, the main controller 201 may perform a control of loading digital data obtained by the image sensors 62L and 62R.
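For illustration only, the photographing-timing condition described above can be sketched as follows; the function name and the 5 ms tolerance are assumptions, not part of the disclosure:

```python
def frames_synchronized(t_left, t_right, max_skew=0.005):
    """Return True when the photographing timestamps (in seconds) of the
    image sensors 62L and 62R match each other or differ by no more than
    a predetermined time (here, an assumed 5 ms tolerance)."""
    return abs(t_left - t_right) <= max_skew
```

A frame pair failing such a check could, for example, be excluded from stereoscopic display.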

[0069] The movement mechanism 31d moves the light sources 31LA and 31RA mutually independently or integrally in a direction that intersects the optical axis of the objective lens 20. By controlling the movement mechanism 31d, the main controller 201 moves the optical axes OL and OR mutually independently or integrally with respect to the optical axis of the objective lens 20.

[0070] The movement mechanism 32d moves the light source 32A in a direction that intersects the optical axis of the objective lens 20. By controlling the movement mechanism 32d, the main controller 201 moves the optical axis OS with respect to the optical axis of the objective lens 20.

[0071] The movement mechanism 70 moves the surgical microscope 10. For example, the movement mechanism 70 is configured to integrally move at least part of the illumination optical system 30 and the observation optical system 40. This configuration makes it possible to change the relative positions of the at least part of the illumination optical system 30 and the observation optical system 40 with respect to the subject's eye while maintaining the relative positional relationship between at least part of the illumination optical system 30 and the observation optical system 40. In some aspects, the movement mechanism 70 is configured to integrally move the first illumination optical systems 31L and 31R and the observation optical system 40. With this, the relative positions of the first illumination optical systems 31L and 31R with respect to the subject's eye and the relative position of the observation optical system 40 with respect to the subject's eye can be changed while maintaining the state (condition) of coaxial illumination. In some aspects, the movement mechanism 70 is configured to integrally move the second illumination optical system 32 and the observation optical system 40. With this, the relative positions of the second illumination optical system 32 and the observation optical system 40 with respect to the subject's eye can be changed while maintaining the illumination angle for oblique illumination. In some aspects, the movement mechanism 70 is configured to integrally move the first illumination optical systems 31L and 31R, the second illumination optical system 32, and the observation optical system 40. This makes it possible to change the relative positions of the illumination optical system 30 and the observation optical system 40 with respect to the subject's eye while maintaining both the state (condition) of coaxial illumination and the illumination angle for oblique illumination. 
The movement mechanism 70 operates under the control of the controller 200.

[0072] In some aspects, the main controller 201 may be configured to control at least two of the movement mechanisms 31d, 32d, and 70 in an interlocking manner.

[0073] The variable magnification mechanism 50Ld moves at least one of the plurality of zoom lenses 51L to 53L of the zoom expander for left eye 50L in the optical axis direction (direction along the optical axis). The main controller 201 changes the magnification ratio of the observation optical system for left eye 40L by controlling the variable magnification mechanism 50Ld.

[0074] Similarly, the variable magnification mechanism 50Rd moves at least one of the plurality of zoom lenses 51R to 53R of the zoom expander for right eye 50R in the optical axis direction (direction along the optical axis). The main controller 201 changes the magnification ratio of the observation optical system for right eye 40R by controlling the variable magnification mechanism 50Rd.

[0075] Controls for the operation device 2 include an operation permission control, an operation prohibition control, controls of transmission and/or reception of operation signals from the operation device 2, and other controls. The main controller 201 receives an operation signal generated by the operation device 2 and executes a control corresponding to the operation signal received.

[0076] Controls for the display device 3 include an information display control and other controls. As a display controller, the main controller 201 displays various kinds of information on the display device 3. For example, the main controller 201 displays an image based on digital image data generated by the image sensors 62L and 62R on the display device 3. Typically, the main controller 201 may display a moving image (video, movie) based on digital image data (video signal) generated by the image sensors 62L and 62R on the display device 3. Further, the main controller 201 may display a still image (frame) included in the moving image on the display device 3. In addition, the main controller 201 may display an image (a moving image, a still image, etc.) obtained by processing the digital image data generated by the image sensors 62L and 62R on the display device 3. Furthermore, the main controller 201 may display, on the display device 3, any information generated by the ophthalmic observation apparatus 1, any information acquired from the outside by the ophthalmic observation apparatus 1, and other types of information.

[0077] Further, the main controller 201 may create an image for left eye from the digital image data generated by the image sensor 62L and create an image for right eye from the digital image data generated by the image sensor 62R, and then display the created image for left eye and the created image for right eye on the display device 3 in such a manner as to enable stereoscopic vision. For example, the main controller 201 may create a pair of left and right parallax images from the image for left eye and the image for right eye, and display the pair of parallax images on the display device 3. With this, the user (e.g., surgeon) can recognize the pair of parallax images as a stereoscopic image by using a known stereoscopic method or technique. The stereoscopic method applicable to the present aspect may be freely selected, and for example, may be any of the following methods: a stereoscopic method for naked eyes; a stereoscopic method using an auxiliary device (polarized glasses, etc.); a stereoscopic method by applying image processing (image synthesis, image composition, rendering, etc.) to an image for left eye and an image for right eye; a stereoscopic method by displaying a pair of parallax images simultaneously; a stereoscopic method by alternately displaying a pair of parallax images; and a stereoscopic method of a combination of two or more of the above methods.
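Of the stereoscopic methods listed above, the simultaneous display of a pair of parallax images can be sketched as follows, assuming the image for left eye and the image for right eye are NumPy arrays of equal shape (the function name is illustrative, not part of the disclosure):

```python
import numpy as np

def side_by_side_pair(image_left, image_right):
    """Compose an image for left eye and an image for right eye into a
    single side-by-side frame, so that the pair of parallax images is
    displayed simultaneously (e.g., for viewing through an auxiliary
    device)."""
    assert image_left.shape == image_right.shape
    return np.concatenate([image_left, image_right], axis=1)
```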

[0078] The data processor 210 executes various kinds of data processes. Some examples of processing that may be executed by the data processor 210 will be described below. The data processor 210 (each element thereof) includes a processor that operates on the basis of predetermined software (program), and is implemented by the cooperation of hardware and software.

[0079] Some examples of processing that may be executed by the data processor 210 will be described together with related elements. FIG. 4 shows an example of a basic configuration of the data processor 210. The data processor 210 of the present example includes the analyzing processor 211 and the target position determining processor 212.

[0080] The surgical microscope 10 is configured to generate a moving image (live image) by photographing the subject's eye. The controller 200 is configured to capture a moving image frame (still image) and input the captured moving image frame into the analyzing processor 211. In some examples, the controller 200 captures a frame from a moving image upon receiving a manual or automatic trigger.

[0081] The analyzing processor 211 is configured to analyze a frame captured from the moving image to detect an image of a predetermined site of the subject's eye, that is, to detect an image region corresponding to a predetermined site of the subject's eye. The image region to be detected from the frame by the analyzing processor 211 may be, for example, an image region identified as an image of the predetermined site of the subject's eye, or an image region obtained by processing the image of the predetermined site of the subject's eye (and its vicinity). Examples of the image region obtained by processing the image of the predetermined site of the subject's eye (and its vicinity) include an approximate figure such as an approximate ellipse or an approximate circle.

[0082] The predetermined site of the subject's eye detected by the analyzing processor 211 may be a freely selected or determined site (part) of the subject's eye. In the case where a target site (a part or region of the subject's eye) to be observed by the ophthalmic observation apparatus 1 is the anterior eye segment, the analyzing processor 211 may be configured to detect any of the following sites, for example: the pupil (its entirety, or a feature site such as the pupil edge, the pupil center, or the pupil center of gravity), the cornea (its entirety, or a feature site such as the corneal ring, the corneal edge, the corneal center, or the corneal apex), the iris (its entirety, or a feature site such as the iris inner edge, the iris outer edge, or the iris pattern), the anterior chamber (its entirety, or a feature site such as the anterior border, or the posterior border), the corner angle (its entirety, a peripheral site, etc.), the crystalline lens (its entirety, or a feature site such as the lens capsule, the anterior capsule, the posterior capsule, or the lens nucleus), the ciliary body, the zonule of Zinn, a blood vessel, and a lesion. In the case where a target site to be observed by the ophthalmic observation apparatus 1 is the posterior eye segment, the analyzing processor 211 may be configured to detect any of the following sites, for example: the optic nerve head, the macula, a blood vessel, the retina (its entirety, the surface, or one or more sub-tissues), the choroid (its entirety, the anterior surface, the posterior surface, or one or more sub-tissues), the sclera (its entirety, the anterior surface, the posterior surface, or one or more sub-tissues), the vitreous body (its entirety, an opaque region, a floating object (floater), a detached tissue, etc.), and a lesion. 
In the case where a target site to be observed by the ophthalmic observation apparatus 1 is not a tissue of an eye ball, the analyzing processor 211 may be configured to detect a freely selected site or tissue such as the eyelid, the meibomian glands, or the orbit (eye socket, eye pit). The site to be detected by the analyzing processor 211 may be selected or determined depending on an illumination method employed, a site subject to surgery, a surgical method conducted, or other factors.

[0083]  The analyzing processor 211 may be configured to detect an image of a predetermined site of the subject's eye from a still image using a freely selected region extraction method or technique. In some examples of detecting an image of a site characterized by its brightness (an image of a site that has a distinctive feature in brightness), the analyzing processor 211 may be configured to perform detection of an image of this site of the subject's eye from a still image using brightness thresholding such as binarization. In the case of detecting an image of a site characterized by its shape (an image of a site that has a distinctive feature in shape), the analyzing processor 211 may be configured to perform detection of an image of this site of the subject's eye from a still image using shape analysis processing such as pattern matching. In the case of detecting an image of a site characterized by its color tone (an image of a site that has a distinctive feature in color tone), the analyzing processor 211 may be configured to perform detection of an image of this site of the subject's eye from a still image using color analysis processing such as feature color extraction. In some examples, the analyzing processor 211 may be configured to detect an image of a predetermined site of the subject's eye by applying segmentation to a still image to identify an image of this site of the subject's eye. Typically, segmentation is image processing for identifying a partial region (subregion) in a given image. Segmentation may include any known image processing technique, and some examples of which may include image processing such as edge detection and/or machine learning (e.g., deep learning) based segmentation.
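The brightness-thresholding case can be illustrated with a minimal sketch for a site characterized by low brightness, such as the pupil; the threshold value and function name are assumptions, and a practical implementation would add noise removal and connected-component selection:

```python
import numpy as np

def detect_dark_site(gray, threshold=50):
    """Detect an image region characterized by low brightness by
    binarization, returning its center of gravity (row, column) and the
    binary mask; returns (None, mask) when no pixel is below threshold."""
    mask = gray < threshold              # binarization of the still image
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, mask
    return (ys.mean(), xs.mean()), mask  # centroid of the detected region
```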

[0084] The image of the predetermined site of the subject's eye (information that represents the position of the image, the area of the image, etc.) detected by the analyzing processor 211 is input into the target position determining processor 212. The target position determining processor 212 is configured to determine a target position for a predetermined treatment (a predetermined medical treatment, a predetermined procedure, a predetermined medical procedure) based at least on the image of the predetermined site detected by the analyzing processor 211.

[0085] The predetermined treatment may be a medical practice of any kind, such as a treatment conducted in surgery or a treatment conducted in therapy. In some examples, the predetermined treatment may be any one or more of the following items in cataract surgery: incision creation (corneal incision, sclerocorneal incision), ophthalmic viscosurgical device (OVD) injection, crystalline lens anterior capsulotomy (CCC), crystalline lens phacoemulsification and aspiration, IOL insertion, ophthalmic viscosurgical device removal, and incision closure. In some examples, the predetermined treatment may be any of the following treatment items: any treatment conducted in refractive surgery, any treatment conducted in minimally invasive glaucoma surgery (MIGS), any treatment conducted in vitreoretinal surgery, and any treatment conducted in other ophthalmic surgery.

[0086] Several examples of the target position determination processing will be described. In the first example, preliminary measurement (preparatory measurement) is performed on the subject's eye. Measurement data obtained from the subject's eye by the preliminary measurement is stored in the memory 202 of the ophthalmic observation apparatus 1 or stored in a storage device accessible by the ophthalmic observation apparatus 1. The first example can be implemented by means of the configuration example shown in FIG. 5. The data processor 210A is an example of the data processor 210 of FIG. 4, and the target position determining processor 212A is an example of the target position determining processor 212 of FIG. 4.

[0087] In some aspect examples where the predetermined treatment is ocular incision (e.g., corneal incision, sclerocorneal incision, conjunctival incision, scleral incision, etc.), the preliminary measurement may be any of the following measurement items, for example: aberration measurement of the subject's eye by means of a wavefront sensor, corneal topography (corneal shape measurement) of the subject's eye by means of a corneal topographer, and corneal topography of the subject's eye by means of an anterior eye segment OCT apparatus. Examples of the wavefront sensor and the corneal topographer are described, for example, in International Publication No. WO 2003/022138. Anterior eye segment OCT is described, for example, in International Publication No. WO 2017/154348.

[0088] The measurement data 203 shown in FIG. 5 is an example of data obtained from the subject's eye by the preliminary measurement. The measurement data 203 may include, for example, aberration data (corneal aberration data) of the subject's eye obtained using the wavefront sensor, corneal shape data of the subject's eye obtained using the corneal topographer, and corneal shape data of the subject's eye obtained using the anterior eye segment OCT apparatus.

[0089]  The target position determining processor 212A of the first example is configured to determine a target position for the predetermined treatment based at least on the image of the predetermined site of the subject's eye detected by the analyzing processor 211 and the measurement data 203. If the predetermined treatment is an ocular incision (e.g., corneal incision, sclerocorneal incision, etc.), the target position determining processor 212A may determine a position where corneal astigmatism that can result from the incision is reduced the most.

[0090] For example, based on the measurement data 203 (e.g., corneal shape data, corneal aberration data, etc.), the target position determining processor 212A can determine a position where the incision has little impact (influence, effect) on corneal astigmatism. Typically, an incision is made at a position of the steepest meridian (first principal meridian, principal meridian, major meridian) of the cornea in order to correct corneal astigmatism before the incision by incision-induced astigmatism (steepest meridian incision treatment). In this case, the measurement data 203 may include data that represents the steepest meridian position of the cornea of the subject's eye, that is, data that represents an angle direction of the steepest meridian. In a wider sense, the measurement data 203 may include data that represents an approximate target position (rough target position) in the subject's eye used to determine a specific target position (accurate target position).

[0091] For example, the target position determining processor 212A may be configured to identify a position corresponding to the direction of the steepest meridian in the corneal ring (or, a position corresponding to the direction of the steepest meridian on a closed curve apart from the corneal ring in an outward direction by a predetermined distance) of the subject's eye, based on steepest meridian data included in the measurement data 203 or steepest meridian data derived from the measurement data 203 and on an image of the corneal ring detected by the analyzing processor 211. Then, the target position determining processor 212A can set the identified position as a target incision position of the subject's eye.
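Approximating the detected corneal ring by a circle, the identification of the position corresponding to the steepest-meridian direction can be sketched as below; parameter names are illustrative, and the offset argument corresponds to the closed curve apart from the corneal ring in an outward direction by a predetermined distance:

```python
import math

def incision_target(ring_center, ring_radius, steep_axis_deg, offset=0.0):
    """Return the (x, y) position on the corneal ring (or on a curve a
    predetermined distance outward from it) corresponding to the angle
    direction of the steepest meridian. Note that a meridian crosses
    the ring twice; the opposite point lies at the angle plus 180
    degrees."""
    cx, cy = ring_center
    theta = math.radians(steep_axis_deg)
    r = ring_radius + offset
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))
```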

[0092] In a wider sense, the target position determining processor 212A may be configured to identify a position corresponding to an approximate target position in the predetermined site (or, a position corresponding to an approximate target position in a location, area, or site that satisfies a predetermined condition with respect to the predetermined site) of the subject's eye, based on an approximate target position included in the measurement data 203 or an approximate target position determined from the measurement data 203 and on an image of the predetermined site detected by the analyzing processor 211. Then, the target position determining processor 212A can set the identified position as a target application position of the predetermined treatment for the subject's eye.

[0093] The number of target positions determined by the target position determining processor 212A may be freely determined. In the case where the predetermined treatment is an ocular incision, the target position determining processor 212A may be configured to determine, for example, a position of a main port (main wound, main incision) and one or more positions of one or more side ports (one or more side wounds, one or more side incisions). The number and/or positions of target positions may be set based on various kinds of conditions such as the type of the predetermined treatment, the kind or degree of disease, the kind of instrument used, and doctor's preference.

[0094] In the second example of the target position determination processing, measurement of the subject's eye is performed by the ophthalmic observation apparatus 1 (or by a system including the ophthalmic observation apparatus 1). In other words, the second example does not require the preliminary measurement described in the first example to be performed. Therefore, the second example can be employed even in the case where measurement data is not prepared in advance.

[0095] Note that the second example may be operated, optionally, to refer to measurement data obtained by preliminary measurement. For example, some aspect examples may be configured to process both the first measurement data obtained by the preliminary measurement and the second measurement data obtained by using the ophthalmic observation apparatus 1 to generate the third measurement data, and then to perform target position determination processing on the basis of the third measurement data. The processing for generating the third measurement data from the first and second measurement data may include freely selected or designed statistical processing. Also, some aspect examples may be configured to select either one of the first measurement data obtained by the preliminary measurement or the second measurement data obtained by the ophthalmic observation apparatus 1, and then to perform target position determination processing on the basis of the measurement data selected.
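As one freely designed example of the statistical processing mentioned above, the first (preliminary) and second (intraoperative) steepest-axis measurements could be combined by a weighted circular mean; because astigmatism axis data repeats every 180 degrees, the angles are doubled before averaging. The weighting scheme and function name are assumptions for illustration:

```python
import math

def combined_axis(prelim_deg, intraop_deg, w_prelim=0.5):
    """Generate third measurement data (a combined steepest-axis angle in
    degrees, in [0, 180)) from the first and second measurement data by
    a weighted circular mean on doubled angles."""
    a1 = math.radians(2.0 * prelim_deg)
    a2 = math.radians(2.0 * intraop_deg)
    x = w_prelim * math.cos(a1) + (1.0 - w_prelim) * math.cos(a2)
    y = w_prelim * math.sin(a1) + (1.0 - w_prelim) * math.sin(a2)
    return (math.degrees(math.atan2(y, x)) / 2.0) % 180.0
```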

[0096]  The measurement data obtained from the subject's eye by means of the ophthalmic observation apparatus 1 is stored in the memory 202 of the ophthalmic observation apparatus 1, or stored in a storage device accessible by the ophthalmic observation apparatus 1. The configuration example shown by FIG. 6 may be applied to the second example. The data processor 210B is an example of the data processor 210 of FIG. 4, and the target position determining processor 212B is an example of the target position determining processor 212 of FIG. 4.

[0097] The ophthalmic observation apparatus 1 of the aspect example shown by FIG. 6 further includes the measuring unit 220, which is optional. The measuring unit 220 has an ophthalmic measuring function for obtaining measurement data of the subject's eye. As with the ophthalmic measurement apparatus used for the preliminary measurement of the first example, the measuring unit 220 may include any of a wavefront sensor, a corneal topographer, and an anterior eye segment OCT apparatus in the case where the predetermined treatment is an ocular incision.

[0098] The measurement data 203 shown in FIG. 6 is an example of data acquired from the subject's eye by means of the measuring unit 220. The measurement data 203 may include, for example, any of the following kinds of data: aberration data (corneal aberration data) of the subject's eye acquired by means of the wavefront sensor; corneal shape data of the subject's eye acquired by means of the corneal topographer; and corneal shape data of the subject's eye acquired by means of the anterior eye segment OCT apparatus.

[0099] The target position determining processor 212B of the second example is configured to determine the target position for the predetermined treatment based at least on the image of the predetermined site of the subject's eye detected by the analyzing processor 211 and the measurement data 203 acquired by the measuring unit 220.

[0100] Similar to the target position determining processor 212A of the first example, if the predetermined treatment is an ocular incision, the target position determining processor 212B may be configured to determine a position where corneal astigmatism that can result from the incision is minimized. The attributes of the target position and the content of the target position determination processing in the second example may be the same as or similar to those in the first example.

[0101] As in the case of considering corneal astigmatism that can be caused by an ocular incision, in the case where the target position is determined in consideration of the influence of the predetermined treatment on the subject's eye, the configuration example shown in FIG. 7 may be employed. The data processor 210C is an example of the data processor 210B of FIG. 6, and the target position determining processor 212C is an example of the target position determining processor 212B of FIG. 6.

[0102] The target position determining processor 212C includes the assessing processor 213. The assessing processor 213 is configured to assess influence of the predetermined treatment on the subject's eye based on the measurement data of the subject's eye acquired by the measuring unit 220. For example, the assessing processor 213 assesses influence of an ocular incision (e.g., corneal incision, sclerocorneal incision, etc.) on corneal astigmatism of the subject's eye. In some aspects, the assessment processing may include, for example, any of the following processes: a process of determining a direction of the steepest meridian from corneal shape data or corneal aberration data; a process of determining the refractive power in the direction of the steepest meridian; a process of determining a direction of a flat meridian (flatter meridian, flattest meridian, second principal meridian, minor meridian); a process of determining the refractive power in the direction of the flat meridian; and a process of obtaining a distribution of refractive powers.
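The processes of determining the steepest meridian, the flat meridian, and their refractive powers can be illustrated as follows. The per-meridian power map, its sampling, and all names are illustrative assumptions; real corneal topography or aberration data is far richer than this sketch.

```python
import math

def find_principal_meridians(power_by_angle):
    """Given refractive power (diopters) sampled per meridian angle
    (degrees, 0-179), return the steepest and flattest meridians:
    (steep_angle, steep_power, flat_angle, flat_power)."""
    steep_angle = max(power_by_angle, key=power_by_angle.get)
    flat_angle = min(power_by_angle, key=power_by_angle.get)
    return (steep_angle, power_by_angle[steep_angle],
            flat_angle, power_by_angle[flat_angle])

# Synthetic with-the-rule astigmatism: power peaks at the 90-degree meridian.
powers = {a: 43.0 + 1.5 * math.sin(math.radians(a)) ** 2
          for a in range(0, 180, 5)}
steep_a, steep_p, flat_a, flat_p = find_principal_meridians(powers)
astigmatism = steep_p - flat_p  # corneal astigmatism estimate (diopters)
```

The difference between the two principal powers is one simple scalar measure of corneal astigmatism that an assessment such as the one performed by the assessing processor 213 could operate on.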

[0103] Furthermore, the assessment processing of some aspect examples may include a simulation of the state (condition) of the cornea after an ocular incision based at least on the state of the cornea before the ocular incision. The state of the cornea may be any one or more of corneal shape data, corneal aberration data, the direction of the steepest meridian, the refractive power in the direction of the steepest meridian, the direction of the flat meridian, the refractive power in the direction of the flat meridian, the refractive power distribution, and like information. A simulation of some examples may include rule-based processing performed according to a predetermined simulation program. Also, a simulation of some examples may include processing performed by means of a machine learning system (e.g., neural network, inference model) constructed by supervised learning with training data that includes states of corneas before ocular incisions, attributes of ocular incisions (e.g., incision positions, incision shapes, incision sizes), and states of corneas after ocular incisions. Simulations that can be performed by the assessing processor 213 are not limited to these examples. The assessing processor 213 of some examples may be configured to execute a simulation by means of other methods or techniques such as unsupervised learning, or may be configured to execute a simulation by means of a combination, at least in part, of two or more methods or techniques.

[0104] The target position determining processor 212C of FIG. 7 is configured to determine the target position for the predetermined treatment for the subject's eye based at least on the image of the predetermined site detected by the analyzing processor 211 and a result acquired by the assessing processor 213. For example, the target position determining processor 212C may determine, as the target position for the predetermined treatment for the subject's eye, a position where corneal astigmatism that can result from the ocular incision is expected to be the least amount.

[0105] In the third example of the target position determination processing, the target position for the predetermined treatment for the subject's eye is determined without referring to measurement data of the subject's eye as is done in the first and second examples. The configuration example of FIG. 4 (or any of the configuration examples of FIG. 5 to FIG. 7) may be applied to the third example. The target position determining processor 212 of the third example is configured to determine, as the target position for the predetermined treatment for the subject's eye, a position away from the image of the predetermined site of the subject's eye detected by the analyzing processor 211 by a predetermined distance. The predetermined distance is defined, for example, as a real distance (e.g., millimeters, micrometers, or the like) or as an image distance (e.g., number of pixels, or the like).

[0106] For example, in an anterior capsulotomy (CCC) for cataract surgery, the analyzing processor 211 may find a feature of the anterior segment of the subject's eye. Here, the feature of the anterior eye segment of the subject's eye may include, for example, any of the following kinds of information: an image of the corneal ring; a size of the image of the corneal ring; an image of the pupil edge; and a size of the image of the pupil edge. Based on the feature of the anterior eye segment obtained by the analyzing processor 211, the target position determining processor 212 may determine a figure that has a predetermined shape and a predetermined size. This figure is, for example, a circle with a diameter of 6 millimeters. This circle may be any of the following circles, for example: a circle with a radius of 3 millimeters centered at the center of an approximate circle (or an approximate ellipse) to the corneal ring; a circle with a radius of 3 millimeters centered at the center of an approximate circle (or an approximate ellipse) to the pupil edge; a circle with a diameter of 6 millimeters that is obtained by enlarging (or reducing) an approximate circle to the corneal ring in an isotropic manner; a circle of a diameter of 6 millimeters which is obtained by enlarging (reducing) an approximate circle to the pupil edge in an isotropic manner. It should be noted that the shape, size, or other parameters of the figure may be changed according to user's preference.
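The determination of a circle centered on an approximate circle to the pupil edge can be illustrated with a minimal sketch. The centroid-based circle fit, the `mm_per_pixel` calibration value, and all names are assumptions for illustration; an approximate ellipse or a more robust fit could be used instead.

```python
import math

def fit_circle(points):
    """Approximate a circle to detected edge points: center = centroid of
    the points, radius = mean distance from that centroid (a deliberately
    simple fit used here only for illustration)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return cx, cy, r

def capsulotomy_circle(pupil_edge_points, diameter_mm=6.0, mm_per_pixel=0.02):
    """Return (cx, cy, r_pixels): a target circle of the requested real
    diameter, centered on the approximate circle to the pupil edge.
    `mm_per_pixel` is an assumed image-scale calibration."""
    cx, cy, _ = fit_circle(pupil_edge_points)
    r_pixels = (diameter_mm / 2.0) / mm_per_pixel
    return cx, cy, r_pixels

# Synthetic pupil edge: 36 points on a 100-pixel circle centered at (320, 240).
edge = [(320 + 100 * math.cos(math.radians(a)), 240 + 100 * math.sin(math.radians(a)))
        for a in range(0, 360, 10)]
cx, cy, r = capsulotomy_circle(edge, diameter_mm=6.0, mm_per_pixel=0.02)
```

With this assumed scale of 0.02 mm per pixel, the 6-millimeter circle corresponds to a 150-pixel radius around the fitted pupil center.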

[0107] Further, the analyzing processor 211 may be configured to detect an image of the corneal ring of the subject's eye in vitreoretinal surgery. The target position determining processor 212 may be configured to set, as a target position for inserting a surgical instrument, a position away from the detected image of the corneal ring in an outward direction by a predetermined distance (e.g., 3 to 4 millimeters). Note that the distance or other parameters may be changed according to the user's preference.

[0108] An example of target position determination processing (including distance calculation processing) applicable to the third example (or other examples) will be described. The following is a description of the case where the predetermined site of the subject's eye detected by the analyzing processor 211 is a corneal ring (iris outer edge). The same or similar processing can also be applied to cases where other sites (e.g., the pupil edge) are considered.

[0109] As preliminary processes (preparatory processes), the following processes may be performed prior to surgery: performing photography of the anterior eye segment of the subject's eye; recording the distance between the subject's eye and the photographing apparatus (working distance (WD)) at the time of the photography; detecting the corneal ring from the photographed image obtained in the photography; measuring the size (e.g., diameter) of the corneal ring detected; and recording the value obtained by the size measurement. Note that the method or technique applied to the preliminary operation of obtaining corneal ring size may be freely selected or determined. For example, corneal ring size may be measured from an OCT image of the anterior eye segment (e.g., cross sectional image or three dimensional image), or may be measured directly without using an image.

[0110] Under the above preparations, the following processes may be performed during the surgery: acquiring an observation image by performing photography of the subject's eye by means of the ophthalmic observation apparatus 1 (the surgical microscope 10); recording a working distance at the time of the photography (by means of the controller 200); detecting the corneal ring from the observation image obtained in the photography (by means of the analyzing processor 211); measuring the size of the corneal ring detected (by means of the data processor 210); and recording a value obtained by the corneal ring size measurement (by means of the controller 200).

[0111] Furthermore, based on the relationship between the working distance at the time of the preoperative photography and the working distance at the time of the intraoperative photography (e.g., the ratio of the two working distances), the target position determining processor 212 finds the relationship between the scale (e.g., actual distance per pixel) of the photographed image obtained prior to the surgery and the scale of the observation image obtained during the surgery (e.g., the ratio of the scales of the two images). Furthermore, based on the relationship between the scale of the preoperative image and the scale of the intraoperative image thus determined, the target position determining processor 212 identifies the position (location, area, site) that is a predetermined distance (e.g., 3 to 4 millimeters) outward from the image of the corneal ring detected in the intraoperative image. In the present example, the controller 200 (the main controller 201) may display an observation image (live image) of the subject's eye on the display device 3, and also display target position information that represents the identified position (target position) on the observation image. The target position information may be in any form. Examples of the form of the target position information may be a circular indicator, elliptic indicator, or arc-shaped indicator.
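The scale relationship derived from the two working distances, and the conversion of the predetermined real offset into pixels, can be sketched as follows. The proportionality of image scale to working distance is a simplifying assumption, and all numeric values and names are illustrative.

```python
def intraop_scale(preop_mm_per_pixel, preop_wd_mm, intraop_wd_mm):
    """Estimate the intraoperative image scale from the preoperative scale
    and the ratio of the two working distances (assumes the scale is
    proportional to working distance, a simplification)."""
    return preop_mm_per_pixel * (intraop_wd_mm / preop_wd_mm)

def offset_in_pixels(distance_mm, mm_per_pixel):
    """Convert the predetermined real offset (e.g., 3 to 4 mm outward from
    the corneal ring) into an image-space offset in pixels."""
    return distance_mm / mm_per_pixel

# Illustrative values: preoperative scale 0.02 mm/px at 200 mm working
# distance; intraoperative working distance 180 mm; 3 mm outward offset.
scale = intraop_scale(preop_mm_per_pixel=0.02, preop_wd_mm=200.0, intraop_wd_mm=180.0)
pixels = offset_in_pixels(3.0, scale)
```

The pixel offset obtained this way is what would be applied outward from the corneal ring image detected in the intraoperative observation image.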

[0112] Some modification examples will be described. In the preoperative photography, photography may be performed with a ruler applied to the subject's eye (anterior eye segment), and the actual distance may be measured based on the ruler depicted in the photographed image obtained. Likewise, in the intraoperative photography, photography may be performed with a ruler applied to the subject's eye (anterior eye segment), and the actual distance may be measured based on the ruler depicted in the photographed image obtained. While the distance from the corneal ring to the target position is 3 to 4 millimeters in the above example, the distance may be freely determined. In some examples, the distance from the corneal ring to the target position may be determined based on any of surgeon's preference, corneal shape (e.g., corneal curvature or corneal curvature distribution), and other factors.

[0113] In the case where any of the configuration examples of FIG. 4 to FIG. 7 or another configuration example is applied, the controller 200 receives a live image of the subject's eye from the surgical microscope 10 and displays the live image on the display device 3. At the same time, the controller 200 displays, on the display device 3, the target position information that represents the target position determined by the analyzing processor 211 and the target position determining processor 212.

[0114] According to the ophthalmic observation apparatus 1 configured in this way, the target position for the predetermined treatment to be applied to the subject's eye can be provided to the user together with a live image, so that the user can see and understand in real time the position to which the predetermined treatment should be applied. This makes it possible to facilitate surgery (or treatment etc.), shorten the time required for surgery (or treatment etc.), reduce or prevent mistakes or errors in treatment, reduce burdens on the patient, and so forth.

[0115] The ophthalmic observation apparatus 1 may be configured to execute real time updates of the target position information displayed together with the live image generated by the surgical microscope 10. In order to implement this real time update function, the analyzing processor 211 may be configured to sequentially detect images of the predetermined site of the subject's eye from frames (still images) sequentially generated as a moving image by the surgical microscope 10, in parallel with generation of the moving image performed by the surgical microscope 10. In parallel with both the generation of the moving image and the detection of the images, the target position determining processor 212 sequentially determines the target position for the predetermined treatment based at least on the images of the predetermined site sequentially detected by the analyzing processor 211. In parallel with the generation of the moving image, the detection of the images, and the target position determination, the controller 200 displays a live image (sequential updating of frames) and performs updates of the target position information (sequential switching of the target position information) based on the target positions sequentially determined by the target position determining processor 212. According to such a configuration, the user can become aware, in real time, of a change in the target position for the predetermined treatment while observing a live image of the subject's eye. Note that the change in the target position is caused by factors such as the movement of the subject's eye.
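The per-frame pipeline described above (detect the predetermined site, determine the target position, display the frame with the overlay) can be sketched as a simple loop. The callables and data values are stand-ins; a real implementation would run the stages in parallel on a video stream rather than sequentially on a list.

```python
def run_realtime_overlay(frames, detect_site, determine_target, display):
    """Per-frame pipeline: detect the predetermined site in each frame,
    determine the target position from the detection result, then display
    the frame together with the target position information."""
    for frame in frames:
        site = detect_site(frame)          # analyzing processor stage
        target = determine_target(site)    # target position determining stage
        display(frame, target)             # controller / display stage

# Toy stand-ins: numbers in place of images and detection results.
log = []
run_realtime_overlay(
    frames=[1, 2, 3],
    detect_site=lambda f: f * 10,
    determine_target=lambda s: s + 1,
    display=lambda f, t: log.append((f, t)),
)
```

Because the target is recomputed for every displayed frame, the overlay tracks movement of the subject's eye frame by frame.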

[0116] When two or more treatments are performed sequentially or in parallel, the ophthalmic observation apparatus 1 can apply different processing to each individual treatment. For example, the following treatments are performed in cataract surgery: creation of an incision (corneal incision, sclerocorneal incision), injection of an ophthalmic viscosurgical device, anterior capsulotomy (CCC), phacoemulsification and aspiration, insertion of an IOL, removal of the ophthalmic viscosurgical device, and closure of the incision.

[0117] Typically, in the case where the first to N-th treatments are applied to the subject's eye, the analyzing processor 211 may analyze the n-th still image generated by the surgical microscope 10 prior to the n-th treatment, thereby detecting an image of the n-th site of the subject's eye.

[0118] Here, N is an integer equal to or greater than 2, and n is any integer belonging to the range between 1 and N. For any two integers n1 and n2 both belonging to the range between 1 and N (n1 and n2 are mutually different integers), the n1-th treatment and the n2-th treatment may be either the same kind of treatment or different kinds of treatments, and the n1-th site and the n2-th site may be either the same site or different sites.

[0119] Furthermore, the target position determining processor 212 may determine the n-th target position, which is the target position for the n-th treatment, based at least on an image of the n-th site detected from the n-th still image.

[0120] In addition, the controller 200 may display, for the n-th treatment, the live image acquired by the surgical microscope 10 and the n-th target position information that represents the n-th target position (n-th display control).

[0121] In the case where two or more treatments are performed sequentially (one after another), the ophthalmic observation apparatus 1 may be configured to execute automatic detection of transitions of treatments (shifting of treatments, switching of treatments). For example, the data processor 210 is configured to detect the transition from the first treatment to the second treatment (by means of a treatment transition detecting processor). It should be noted that the ophthalmic observation apparatus 1 may be configured to detect treatment transitions in response to an instruction issued by the user. The instruction may be input, for example, by a manual operation, voice, or the like.

[0122] Many surgeries and treatments are conducted according to predetermined procedures. For example, cataract surgery is conducted according to a series of procedures (a series of treatments) including incision creation, ophthalmic viscosurgical device injection, anterior capsulotomy, phacoemulsification and aspiration, IOL insertion, ophthalmic viscosurgical device removal, and incision closure. Taking this fact into account, as an operation mode corresponding to a specific surgery (or, a specific treatment or a specific therapy), processing contents (e.g., image detection, target position determination, target position information display, etc.) for individual procedures of the surgery can be determined and set in advance.

[0123] By referring to such preset information, the ophthalmic observation apparatus 1 is capable of automatically detecting an event in which one procedure has been completed and the next procedure (new procedure, new treatment) has commenced, and further of executing the operation mode associated in advance with the new procedure.

[0124] The automatic detection of procedure transitions (treatment transitions) may be implemented, for example, by analyzing a live image to detect the state of a specific site of the subject's eye and/or by analyzing a live image to detect an image of an instrument and/or an image of an intraocular implantation device.

[0125] For example, automatic detection of a transition to an anterior capsulotomy (CCC) procedure in cataract surgery may be implemented by a combination of detection of a state in which a corneal incision has already been formed and detection of a state in which a treatment on the anterior lens capsule has not yet been performed. In some alternative examples, automatic detection of a transition to an anterior capsulotomy (CCC) procedure may be implemented by a combination of detection of a state in which a corneal incision has already been formed and detection of an image of an instrument for CCC.
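The two detection rules above can be expressed as a small predicate. The boolean inputs stand in for image-analysis results (whether a corneal incision is visible, whether the anterior capsule has been treated, whether a CCC instrument appears in the live image); the function name and signature are assumptions for illustration.

```python
def detect_ccc_transition(incision_present, capsule_treated,
                          ccc_instrument_visible=None):
    """Detect a transition to the anterior capsulotomy (CCC) phase.

    Rule (a): an incision has been formed and the anterior capsule has not
    yet been treated. Rule (b), used when instrument detection is
    available: an incision has been formed and a CCC instrument is visible.
    """
    if ccc_instrument_visible is not None:
        return incision_present and ccc_instrument_visible
    return incision_present and not capsule_treated
```

On a positive detection, the apparatus would switch to the operation mode preset for the CCC phase (e.g., pupil edge detection and circular indicator display).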

[0126]  Note that procedures or treatments to be automatically detected are not limited to anterior capsulotomy (CCC). Procedures or treatments to be automatically detected may be other procedures or treatments in cataract surgery, any procedures or treatments in any other kinds of surgery, or any kinds of procedures or treatments in any kinds of medical practices (treatments, therapies, diagnosis, etc.). Specific methods or techniques applied to automatic detections of transitions to procedures (treatments) other than anterior capsulotomy (CCC) may be performed in the same manner as the above example for anterior capsulotomy (CCC).

< Operation and usage mode >



[0127] The operation and the usage mode of the ophthalmic observation apparatus 1 will be described. FIG. 8A and FIG. 8B show an example of the operation and the usage mode of the ophthalmic observation apparatus 1. While the present example describes an application to the corneal incision phase and the anterior capsulotomy (CCC) phase conducted in cataract surgery, substantially the same or a similar operation and substantially the same or a similar usage mode as those in the present example can also be implemented for applications to other phases of cataract surgery or applications to other medical practices (surgery, therapy, treatment, diagnosis, etc.).

(S1: Commence generation and display of live image)



[0128] To begin with, the user performs a predetermined operation using the operation device 2 to cause the ophthalmic observation apparatus 1 to start generating and displaying a live image of the subject's eye (the anterior eye segment thereof). More specifically, the surgical microscope 10 illuminates the subject's eye by the illumination optical system 30, and at the same time (in parallel, simultaneously) generates digital image data (moving image, video) of the subject's eye by the image sensors 62L and 62R. The generated moving image (the live image 301) is displayed in real time on the display device 3 (see FIG. 9A). In other words, a moving image acquired by the surgical microscope 10 is displayed on the display device 3 as a live image (as an observation image). The user can conduct surgery while observing the live image.

(S2: Select cataract surgery mode)



[0129] The ophthalmic observation apparatus 1 displays, on the display device 3, a screen used for selection (designation) of operation modes, for example, in response to an instruction issued by the user. Options of the operation modes may include, for example, a cataract surgery mode, a vitreoretinal surgery mode, and so forth. The user selects a desired operation mode using the operation device 2. In the present example, the cataract surgery mode is selected.

(S3: Move to corneal incision phase)



[0130] The cataract surgery has started, and the procedure moves to the corneal incision phase.

(S4: Detect image of predetermined site from live image)



[0131] Next, the analyzing processor 211 detects an image of the predetermined site of the subject's eye from the live image (from a frame(s) of the live image) the generation of which has started in the step S1. The site to be detected for the corneal incision may be, for example, the pupil edge or the corneal ring.

[0132] In order to detect an image of the predetermined site, in some aspects, the analyzing processor 211 first applies image processing, such as binarization, edge detection, segmentation, or other methods, to a frame of the live image 301. In the example shown in FIG. 9B, the pupil edge image 302 is detected from the live image 301.

(S5: Determine target position for corneal incision)



[0133] Next, the target position determining processor 212 determines a corneal incision target position in the live image based at least on the image of the predetermined site (the pupil edge) detected in the step S4. The target position determining processor 212 performs, for example, any of the processing examples described above.

[0134] In the example shown in FIG. 9C, the target position determining processor 212 first identifies the location (position, site, area) 303 that is away from the pupil edge image 302 detected in the step S4 in an outward direction by a predetermined distance. Next, the target position determining processor 212 finds the steepest meridian 304 of the cornea of the subject's eye from the measurement data 203 (e.g., corneal shape data, corneal aberration data, etc.) acquired in advance (see FIG. 9D). The target position determining processor 212 determines, as the corneal incision target position, the position(s) at which the steepest meridian 304 and the position 303, which is the position away from the pupil edge image 302 in the outward direction by the predetermined distance, intersect each other. In the example shown in FIG. 9D, two corneal incision target positions 305a and 305b are determined.
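The final step, intersecting the steepest meridian with the offset locus around the pupil edge, can be sketched under a simplifying assumption: the meridian is treated as a line through the center of an offset circle, so the two intersections are diametrically opposite. All names and values are illustrative.

```python
import math

def incision_targets(center, radius_px, steep_axis_deg):
    """Intersect the steepest meridian (modeled as a line through `center`
    at angle `steep_axis_deg`) with the offset circle of `radius_px` around
    the pupil; returns the two corneal incision target positions,
    corresponding to 305a and 305b in the example."""
    cx, cy = center
    dx = math.cos(math.radians(steep_axis_deg))
    dy = math.sin(math.radians(steep_axis_deg))
    return ((cx + radius_px * dx, cy + radius_px * dy),
            (cx - radius_px * dx, cy - radius_px * dy))

# Pupil center (320, 240), offset locus 130 px out, steepest axis at 90 deg.
t1, t2 = incision_targets((320.0, 240.0), 130.0, 90.0)
```

If the meridian does not pass through the circle center (e.g., the corneal apex and pupil center differ), a general line-circle intersection would be needed instead.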

(S6: Display target position information for corneal incision on live image)



[0135] Next, the controller 200 displays target position information that represents the corneal incision target position determined in the step S5 on the live image of the subject's eye.

[0136] In the example shown in FIG. 9E, the two pieces of target position information 306a and 306b are displayed on the live image 301. The two pieces of target position information 306a and 306b represent the two corneal incision target positions 305a and 305b determined in the example shown in FIG. 9D, respectively. The two pieces of target position information 306a and 306b represent the positions (locations, areas) in which incisions should be made in the corneal incision. Each of the two pieces of target position information 306a and 306b is an arc-shaped indicator image. The aspect (e.g., length, radius of curvature, thickness, etc.) of the arc-shaped indicator image may be determined in advance, and/or may be freely changed according to the user's instructions. This allows the user to easily recognize the position (location, area, part) in the cornea to be incised. In addition, the present example can facilitate the steepest meridian incision performed for the purpose of reducing postoperative corneal astigmatism.

(S7: Move to anterior capsulotomy?)



[0137] The series of the steps S4 to S6 is repeated at the earliest until the corneal incision is completed. Typically, the series of the steps S4 to S6 is repeatedly performed until the commencement of an ophthalmic viscosurgical device injection or the commencement of an anterior capsulotomy (CCC). The series of the steps S4 to S6 is sequentially applied to the frames that are sequentially acquired as the live image 301. For example, the series of the steps S4 to S6 may be applied to individual frames acquired as the live image 301 (time-series images). Alternatively, the series of the steps S4 to S6 may be applied to one or more frames selected from all the frames acquired as the live image 301. The process of selecting frames may include, for example, thinning, that is, selecting frames at intervals of a predetermined number of frames. Such repetitive processing allows the user to easily recognize in real time the position (location, area, part) in the cornea to be incised, even when the corneal incision target position changes due to the movement of the subject's eye.
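The frame-thinning selection mentioned above can be sketched in a few lines; the interval value and names are illustrative assumptions.

```python
def thin_frames(frame_indices, interval=5):
    """Select every `interval`-th frame from a live-image stream so that the
    detect/determine/display cycle (steps S4 to S6) runs on a subset of
    frames rather than on every frame."""
    return [i for i in frame_indices if i % interval == 0]

# From 20 consecutive frames, process only every fifth frame.
selected = thin_frames(range(20), interval=5)
```

Thinning trades overlay update latency for processing load; the unprocessed frames are still displayed as part of the live image.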

[0138] Corneal incision target position information (e.g., the arc-shaped indicator image(s)) may be displayed at any stage after the corneal incision is made. For example, the corneal incision target position information may be displayed in response to an instruction issued by the user. This allows the user to confirm the target position for corneal incision at a desired timing. For example, side ports are so small that they are easily lost sight of during surgery; according to the present example, however, the user is able to confirm the position of the side ports at any time. It should be noted that when the position represented by the corneal incision target position information is actually incised, the controller 200 may store the coordinate information of the position represented by this corneal incision target position information in the memory 202 or another storage device. On the other hand, when a position different from the position represented by the corneal incision target position information is actually incised, the controller 200 may store, in the memory 202 or another storage device, the coordinate information of the position where the incision has actually been made. Here, together with the coordinate information of the actually incised position, the controller 200 may further store the coordinate information of the position represented by the corneal incision target position information in the memory 202 or another storage device.

(S8: Detect image of predetermined site from live image)



[0139] Upon the commencement of the anterior capsulotomy (CCC) phase, the analyzing processor 211 detects an image of the predetermined site of the subject's eye from the live image (from a frame(s) of the live image) whose generation has started in the step S1. The site to be detected for the anterior capsulotomy may be, for example, the pupil edge or the corneal ring.

[0140] In order to detect an image of the predetermined site, in some aspects, the analyzing processor 211 first applies image processing, such as binarization, edge detection, segmentation, or other methods, to a frame of the live image 301. In the example shown in FIG. 10A, the pupil edge image 311 is detected from the live image 301 as in the corneal incision phase.

(S9: Determine target position for anterior capsulotomy)



[0141] Next, the target position determining processor 212 determines the anterior capsulotomy target position in the live image based at least on the image of the predetermined site (pupil edge) detected in the step S8. The target position determining processor 212 performs, for example, any of the processing examples described above.

[0142] In the example shown in FIG. 10B, the target position determining processor 212 identifies the position that is away from the pupil edge image 311 detected in the step S8 in an inward direction by a predetermined distance. Then, the target position determining processor 212 sets the identified position as the anterior capsulotomy target position 312.

(S10: Display target position information for anterior capsulotomy on live image)



[0143] Next, the controller 200 displays target position information that represents the anterior capsulotomy target position determined in the step S9 on the live image of the subject's eye. In the example shown in FIG. 10C, the target position information 313 indicating the anterior capsulotomy target position 312 determined in the example shown in FIG. 10B, is displayed on the live image 301. The target position information 313 is a circular indicator image (instead, it may be an arc-shaped indicator image), which prompts the user to perform the anterior capsulotomy along a part of this indicator image. The aspect of the circular indicator image may be determined in advance, and/or may be freely changed according to user's instructions. This allows the user to easily perceive the position of the anterior lens capsule to be incised.

(S11: Move to next treatment?)



[0144] The series of the steps S8 to S10 is repeated at the earliest until the anterior capsulotomy is completed. Typically, the series of the steps S8 to S10 is repeatedly performed until the commencement of phacoemulsification and aspiration. The series of the steps S8 to S10 is sequentially applied to the frames that are sequentially acquired as the live image 301. For example, the series of the steps S8 to S10 may be applied to individual frames acquired as the live image 301 (time-series images). Alternatively, the series of the steps S8 to S10 may be applied to one or more frames selected from all the frames acquired as the live image 301. The process of selecting frames may include, for example, thinning, that is, selecting frames at intervals of a predetermined number of frames. Such repetitive processing allows the user to easily recognize in real time the position (location, area, part) in the anterior lens capsule to be incised, even when the anterior capsulotomy target position changes due to the movement of the subject's eye.

[0145]  Anterior capsulotomy target position information (e.g., the circular indicator image) may be displayed at any stage after the incision is made on the anterior lens capsule. For example, the anterior capsulotomy target position information may be displayed in response to an instruction issued by the user. This allows the user to confirm the target position for anterior capsulotomy at a desired timing. When the position represented by the anterior capsulotomy target position information is actually incised, the controller 200 may store the coordinate information of the position represented by this anterior capsulotomy target position information in the memory 202 or another storage device. On the other hand, when a position different from the position represented by the anterior capsulotomy target position information is actually incised, the controller 200 may store, in the memory 202 or another storage device, the coordinate information of the position where the incision has actually been made. Here, together with the coordinate information of the actually incised position, the controller 200 may further store the coordinate information of the position represented by the anterior capsulotomy target position information in the memory 202 or another storage device.

(S12: Apply other treatments)



[0146] After the completion of the anterior capsulotomy, the user sequentially conducts subsequent treatments such as phacoemulsification and aspiration, IOL insertion, ophthalmic viscosurgical device removal, and incision closure. This completes the cataract surgery (End).

[0147] Some other aspect examples are briefly described below. In vitreoretinal surgery, for example, the analyzing processor 211 first applies image processing (e.g., binarization) to an observation image to detect an image of the corneal ring. Next, the target position determining processor 212 identifies a position that is a predetermined distance (e.g., 3 to 4 millimeters) outward from the detected image of the corneal ring as a target position. Then, the controller 200 displays an indicator (e.g., an indicator having a circular shape) at the target position.
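The geometric part of the vitreoretinal-surgery example above (offsetting the detected corneal-ring circle outward by a predetermined distance) can be sketched as follows. The function and parameter names are illustrative assumptions, and the default of 3.5 mm is one value inside the 3 to 4 millimeter range mentioned in the text; detecting the ring itself (e.g., by binarization) is not shown.

```python
def target_circle(ring_center, ring_radius_px, px_per_mm, offset_mm=3.5):
    # Return the center and radius (in pixels) of the circle lying
    # `offset_mm` outward from the detected corneal-ring image, i.e. the
    # circle on which the instrument-insertion target positions lie.
    cx, cy = ring_center
    return (cx, cy), ring_radius_px + offset_mm * px_per_mm
```

The resulting circle would be drawn over the live image as the indicator, as in the example of FIG. 11.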

[0148] FIG. 11 shows an example of information displayed for vitreoretinal surgery. In the present example, along with the live image 401 acquired by the surgical microscope 10, the circular indicator image 403 that represents the position that is the predetermined distance (e.g., 3 to 4 millimeters) outward from the image of the corneal ring 402 is displayed. The user can create a port by making a scleral incision (conjunctival incision) in vitreoretinal surgery while referring to the indicator image 403. The port provides a pathway for inserting a light guide (illumination) and various kinds of instruments into the eye. FIG. 11 presents the three ports 404a, 404b, and 404c formed on the indicator image 403. The ophthalmic observation apparatus 1 may record the coordinate information of each of the ports formed, and furthermore, may display indicator images respectively indicating the positions of the ports on the live image in response to a request from the user, for example.

[0149] As described thus far, ophthalmic surgeries are typically conducted by inserting an instrument(s) into the eye. The present disclosure may be configured to, for example, display the optimal treatment position (e.g., instrument insertion position, incision position, perforation position, etc.) on a live image. For example, by displaying an indicator (e.g., an image) that represents a treatment target position during surgery, the marking of the treatment target position can be made within the user's field of view, thereby facilitating the surgery.

[0150] Furthermore, the present disclosure can execute tracking of the treatment target position by sequentially applying processing to a plurality of frames that are sequentially acquired as a live image. In addition, the present disclosure can present the treatment position on the live image after the corresponding treatment has actually been executed.

[0151] Although the present disclosure describes several applications to cataract surgery (e.g., at the time of corneal incision, at the time of anterior capsulotomy, etc.) and vitreoretinal surgery, those skilled in the art will understand that possible applications of the present disclosure are not limited thereto.

[0152] Note that some aspects can, at the time of corneal incision for cataract surgery, display a figure (e.g., an arc-shaped line) that serves as an incision target based on a feature of the anterior eye segment (e.g., the corneal ring, the pupil, the size of the corneal ring, the size of the pupil, etc.) and measurement data of the subject's eye. The size and the shape of this figure may be changeable. It should also be noted that marking processing for the corneal incision can be performed prior to or during the surgery by carrying out the following processes: acquiring corneal data by means of a wavefront sensor, a corneal topographer, or an anterior eye segment OCT; identifying a position where the effect on corneal astigmatism is small; and finding a feature position (e.g., the pupil edge, the pupil center) from a live image.
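The arc-shaped incision-target figure mentioned above can be sketched as follows. This sketch only places an arc on a meridian angle supplied by the caller; how that angle is chosen from the corneal data (the "position where the effect on corneal astigmatism is small") is outside the sketch, and all names and the arc width are illustrative assumptions.

```python
import math

def incision_arc(center, radius_px, meridian_deg, arc_deg=30, n=16):
    # Sample `n` points of an arc-shaped incision-target figure centered on
    # the meridian `meridian_deg` (in degrees) of a circle of `radius_px`
    # pixels around `center`. The 30-degree arc width is an assumed value.
    cx, cy = center
    start = math.radians(meridian_deg - arc_deg / 2)
    end = math.radians(meridian_deg + arc_deg / 2)
    return [(cx + radius_px * math.cos(start + (end - start) * i / (n - 1)),
             cy + radius_px * math.sin(start + (end - start) * i / (n - 1)))
            for i in range(n)]
```

The sampled points would be connected and overlaid on the live image as the arc-shaped indicator, anchored to the feature position (e.g., the pupil center) found from the frame.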

[0153] Some aspects can, at the time of anterior capsulotomy for cataract surgery, display a figure (e.g., a circle of a diameter of 6 millimeters) determined from a feature of the anterior eye segment (e.g., the corneal ring, the pupil, the size of the corneal ring, the size of the pupil, etc.) as an incision target. The size and the shape of this figure may be changeable. Note that marking processing for the anterior capsulotomy can be performed prior to or during the surgery by carrying out the following processes: acquiring a predetermined kind of data (e.g., a corneal ring diameter, a pupil diameter); determining the size of the indicator image (figure) to be displayed; detecting a feature part (e.g., the corneal ring, the pupil edge) from a live image; and displaying an indicator image having the size determined.
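The sizing step above (converting the 6 mm target circle into image pixels) can be sketched as follows. The sketch calibrates the image scale from the detected corneal-ring image and a measured corneal ring diameter; the 11.7 mm default and all names are assumed example values, not part of the disclosure.

```python
def capsulotomy_indicator_radius_px(ring_diameter_px, ring_diameter_mm=11.7,
                                    target_diameter_mm=6.0):
    # Estimate the pixel radius of the capsulotomy-target circle: use the
    # detected corneal-ring image (in pixels) together with the measured
    # ring diameter (in millimeters) to obtain the image scale, then scale
    # the 6 mm target diameter accordingly.
    px_per_mm = ring_diameter_px / ring_diameter_mm
    return target_diameter_mm * px_per_mm / 2
```

The circle of the resulting radius would then be drawn at the detected feature position (e.g., centered on the pupil) on each selected frame of the live image.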

[0154] Some aspects can, at the time of vitreoretinal surgery, display a figure (e.g., a circular image) that serves as an instrument insertion target (e.g., perforation target, port formation target) at a position away from the corneal ring in an outward direction by a predetermined distance (e.g., 3 to 4 millimeters). The size and the shape of this figure may be changeable.

[0155] In surgeries, treatments, therapies, examinations, and so forth other than the above-described kinds of surgeries and the like, target position information indicating a position to which a predetermined treatment should be applied, can be displayed together with a live image.

[0156] This makes it possible to achieve easier treatment, shorter treatment, fewer errors in treatment, and less burden on patients during surgery, treatment, and examination.

< Method of controlling ophthalmic observation apparatus >



[0157] Some embodiment examples (e.g., the ophthalmic observation apparatus 1 described above) provide a method of controlling an ophthalmic observation apparatus. It is possible to combine any items or matters relating to the ophthalmic observation apparatus 1 of the above embodiment examples with the example of the method described below.

[0158] An ophthalmic observation apparatus controlled by the method of an aspect example includes a processor (e.g., the controller 200 and the data processor 210) and generates a moving image of a subject's eye (e.g., by means of the surgical microscope 10). The method of the present aspect example first causes the processor to analyze a still image included in the moving image to detect an image of a predetermined site of the subject's eye. Further, the method of the present aspect example causes the processor to determine a target position for a predetermined treatment based at least on the image of the predetermined site detected from the still image. In addition, the method of the present aspect example causes the processor to display the moving image on a display device and also display target position information that represents the determined target position on the display device.
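The three steps of the method described above can be sketched as a single per-frame loop. The three callables are assumed placeholders standing in for the analyzing processor, the target position determining processor, and the display controller; none of the names below appear in the disclosure.

```python
def control_loop(frames, detect_site, determine_target, display):
    # Per still image (frame) of the moving image: (1) detect the
    # predetermined site, (2) determine the treatment target position from
    # the detected site, (3) display the frame together with the target
    # position information.
    for frame in frames:
        site = detect_site(frame)
        target = determine_target(site)
        display(frame, target)
```

Combining this loop with, for example, the frame thinning described earlier would yield the real-time tracking behavior of the apparatus.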

[0159] The method of the present aspect example is capable of achieving the same actions and effects as those of the ophthalmic observation apparatus 1 of the above embodiment examples. In addition, by combining any of the matters and items relating to the ophthalmic observation apparatus 1 of the above embodiment examples with the method of the present aspect example, the resulting method becomes capable of achieving the actions and effects corresponding to the combined matters and/or items.

< Program >



[0160] Some embodiment examples provide a program causing a computer to execute the method of the aspect example described above. It is possible to combine any of the matters and items relating to the ophthalmic observation apparatus 1 of the above embodiment examples with such a program.

[0161] The program thus configured is capable of achieving the same actions and effects as those of the ophthalmic observation apparatus 1 of the above embodiment examples. In addition, by combining any of the matters and items relating to the ophthalmic observation apparatus 1 of the above embodiment examples with the program, the resulting program is capable of achieving the actions and effects corresponding to the combined matters and/or items.

< Recording medium >



[0162] Some embodiment examples provide a computer-readable non-transitory recording medium storing the program described above. It is possible to combine any of the matters and items relating to the ophthalmic observation apparatus 1 of the above embodiment examples with such a recording medium. The non-transitory recording medium may be in any form, and examples thereof include magnetic disks, optical disks, magneto-optical disks, and semiconductor memories.

[0163] The recording medium thus configured is capable of achieving the same actions and effects as those of the ophthalmic observation apparatus 1 of the above embodiment examples. In addition, by combining any of the matters and items relating to the ophthalmic observation apparatus 1 of the above embodiment examples with the recording medium, the resulting recording medium is capable of achieving the actions and effects corresponding to the combined matters and/or items.

[0164] The embodiments described in the present disclosure are merely examples, and any modification, omission, addition, substitution, etc. can be made within the scope of the present disclosure and its equivalents.

EXPLANATION OF REFERENCE CHARACTERS



[0165] 

1 Ophthalmic observation apparatus

2 Operation device

3 Display device

10 Surgical microscope

30 Illumination optical system

40 Observation optical system

200 Controller

203 Measurement data

210, 210A, 210B, 210C Data processor

211 Analyzing processor

212, 212A, 212B, 212C Target position determining processor

213 Assessing processor

220 Measuring unit




Claims

1. An ophthalmic observation apparatus for observing a subject's eye, comprising:

a moving image generating unit configured to generate a moving image by photographing the subject's eye;

an analyzing processor configured to analyze a still image included in the moving image to detect an image of a predetermined site of the subject's eye;

a target position determining processor configured to determine a target position for a predetermined treatment based at least on the image of the predetermined site detected by the analyzing processor; and

a display controller configured to display, on a display device, the moving image and target position information that represents the target position.


 
2. The ophthalmic observation apparatus according to claim 1, wherein
the target position determining processor is configured to determine the target position based at least on the image of the predetermined site detected by the analyzing processor and measurement data of the subject's eye acquired in advance.
 
3. The ophthalmic observation apparatus according to claim 1, further comprising a measuring unit configured to acquire measurement data of the subject's eye, wherein
the target position determining processor is configured to determine the target position based at least on the image of the predetermined site detected by the analyzing processor and the measurement data acquired by the measuring unit.
 
4. The ophthalmic observation apparatus according to claim 3, wherein

the target position determining processor includes an assessing processor configured to assess influence of the predetermined treatment on the subject's eye based on the measurement data acquired by the measuring unit, and

the target position determining processor is configured to determine the target position based at least on the image of the predetermined site detected by the analyzing processor and a result obtained by the assessing processor.


 
5. The ophthalmic observation apparatus according to any of claims 1 to 4, wherein
the target position determining processor is configured to determine, as the target position, a position away from the image of the predetermined site detected by the analyzing processor by a predetermined distance.
 
6. The ophthalmic observation apparatus according to any of claims 1 to 5, wherein when a first treatment and a second treatment are applied to the subject's eye,

the analyzing processor performs first detection and second detection, the first detection including analysis of a first still image generated by the moving image generating unit prior to the first treatment to detect an image of a first site of the subject's eye, and the second detection including analysis of a second still image generated by the moving image generating unit prior to the second treatment to detect an image of a second site of the subject's eye,

the target position determining processor performs first determination and second determination, the first determination including determination of a first target position that is a target position for the first treatment based at least on the image of the first site, and the second determination including determination of a second target position that is a target position for the second treatment based at least on the image of the second site, and

the display controller performs a first display control and a second display control, the first display control including a control for displaying the moving image and first target position information that represents the first target position for the first treatment, and the second display control including a control for displaying the moving image and second target position information that represents the second target position for the second treatment.


 
7. The ophthalmic observation apparatus according to claim 6, further comprising a treatment change detecting unit configured to detect a change from the first treatment to the second treatment.
 
8.  A method of controlling an ophthalmic observation apparatus that includes a processor and generates a moving image of a subject's eye, the method comprising:

causing the processor to analyze a still image included in the moving image to detect an image of a predetermined site of the subject's eye;

causing the processor to determine a target position for a predetermined treatment based at least on the image of the predetermined site detected; and

causing the processor to display, on a display device, the moving image and target position information that represents the target position.


 
9. A program configured to cause a computer to execute the method of claim 8.
 
10. A computer-readable non-transitory recording medium storing the program of claim 9.
 




Drawing

Search report

Cited references

REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Patent documents cited in the description