T 0509/18 of 3.3.2020

European Case Law Identifier: ECLI:EP:BA:2020:T050918.20200303
Date of decision: 03 March 2020
Case number: T 0509/18
Application number: 12765209.7
IPC class: B60K28/06
B60W50/14
A61B5/18
G06K9/00
Language of proceedings: EN
Distribution: D
Versions: Unpublished
Title of application: SYSTEM AND METHOD FOR DETERMINING DRIVER ALERTNESS
Applicant name: TK Holdings Inc.
Opponent name: -
Board: 3.2.01
Headnote: -
Relevant legal provisions:
European Patent Convention Art 83 (2007)
Keywords: sufficiency of disclosure (no)
Catchwords: -
Cited decisions: -
Citing decisions: -

Summary of Facts and Submissions

I. European patent application No. 12 765 209.7 was refused by the decision of the Examining Division posted on 13 October 2017. The Examining Division found that the subject-matter of claim 1 was not new over D1 (US 6 927 694 B1) and D11 (Ji Qiang et al.: "Real-Time Eye, Gaze and Face Pose Tracking for Monitoring Driver Vigilance", Real-Time Imaging, Vol. 8, pp. 357-377, 2002, Elsevier Publishing). Against this decision an appeal was lodged by the Applicant in due form and in due time pursuant to Article 108 EPC.

II. With its statement of grounds of appeal the Appellant (Applicant) filed a new main request and a first and second auxiliary request.

III. In a communication dated 20 December 2019 the Board informed the Appellant that it concurred with the view taken in the appealed decision and that the amendments introduced into claim 1 of the main request, as well as into claim 1 of the first and second auxiliary requests, could not render the subject-matter of claim 1 new over D1 and D11. In particular, these amendments included features which were not clearly defined in claim 1 and in the application (hereinafter designated as WO-A) and which could not contribute to novelty over D1 and D11.

The Board also expressed the view that the application did not meet the requirements of Article 84 EPC and Article 83 EPC, since, for example, the features "classification training process", "matrix of inter-point metrics" and "eye vector" were not sufficiently clearly and completely disclosed, such that the skilled person would not be able to put the invention into effect.

IV. With its letter dated 24 February 2020 the Appellant filed a third auxiliary request and presented arguments as to why the subject-matter of claim 1 of the main request (and of the auxiliary requests) was new and inventive over the prior art, and illustrated why the invention was considered to be sufficiently clearly and completely disclosed, further submitting two documents as evidence of the common general knowledge of the skilled person:

US-B1-7 538 744;

"An Experimental Multimedia System Allowing 3-D Visualization and Eye-Controlled Interaction Without User-Worn Devices", Siegmund Pastoor, Jin Liu et al., IEEE TRANSACTIONS ON MULTIMEDIA, Vol. 1, No. 1, March 1999, Pages 41-52).

V. Oral proceedings were held on 3 March 2020. The Appellant requested that the decision under appeal be set aside and that a patent be granted on the basis of the main request or alternatively on the basis of the first or second auxiliary requests filed with its statement of grounds of appeal or the third auxiliary request filed with letter dated 24 February 2020.

VI. Claim 1 of the main request reads as follows:

"A driver alertness detection system, comprising:

an imaging unit (110) configured to image an area in a vehicle compartment of a vehicle where a driver's head is located;

an image processing unit (1120) configured to receive the image from the imaging unit (1110), and to determine positions of the driver's head and eyes and configured to use a classification training process to register the driver's head position and eye vector at several pre-determined points within the vehicle to be used for classification of the driver's attention state; and

a warning unit (1130) configured to determine, based on the determined position of the driver's head and eyes as output by the image processing unit (1120), whether the driver is in an alert state or a non-alert state, and to output a warning to the driver when the driver is determined to be in the non-alert state,

wherein the image processing unit (1120) determines that the driver is in the non-alert state when the determined position of the driver's head is determined not to be within a predetermined driver head area region within the vehicle compartment or when the driver's eyes are determined to be angled to an extent so as not to be viewing an area in front of the vehicle, and,

wherein, based on the driver's attention state, an appropriate warning is provided to the driver, so that, if the driver is detected to be in an attention partially diverted state, a mild warning is provided to the driver, and, when the driver is detected to be in an attention fully diverted state, a loud warning is provided to the driver."

Claim 1 of the first auxiliary request reads as follows:

"A driver alertness detection system, comprising:

an imaging unit (110) configured to image an area in a vehicle compartment of a vehicle where a driver's head is located;

an image processing unit (1120) configured to receive the image from the imaging unit (1110), and to determine positions of the driver's head and eyes configured to use a classification training process to register the driver's head position and eye vector at several pre-determined points within the vehicle, and configured to save a matrix of inter-point metrics to be used for a look-up-table classification of the driver's attention state; and

a warning unit (1130) configured to determine, based on the determined position of the driver's head and eyes as output by the image processing unit (1120), whether the driver is in an alert state or a non-alert state, and to output a warning to the driver when the driver is determined to be in the non-alert state,

wherein the image processing unit (1120) determines that the driver is in the non-alert state when the determined position of the driver's head is determined not to be within a predetermined driver head area region within the vehicle compartment or when the driver's eyes are determined to be angled to an extent so as not to be viewing an area in front of the vehicle, and,

wherein, based on the driver's attention state, an appropriate warning is provided to the driver, so that, if the driver is detected to be in an attention partially diverted state, a mild warning is provided to the driver, and, when the driver is detected to be in an attention fully diverted state, a loud warning is provided to the driver."

Claim 1 of the second auxiliary request reads as follows:

"A driver alertness detection system, comprising:

an imaging unit (110) configured to image an area in a vehicle compartment of a vehicle where a driver's head is located;

an image processing unit (1120) configured to receive the image from the imaging unit (1110), and to determine positions of the driver's head and eyes configured to use a classification training process to register the driver's head position and eye vector at several pre-determined points within the vehicle, and configured to save a matrix of inter-point metrics to be used for a look-up-table classification of the driver's attention state; and

a warning unit (1130) configured to determine, based on the determined position of the driver's head and eyes as output by the image processing unit (1120), whether the driver is in an alert state or a non-alert state, and to output a warning to the driver when the driver is determined to be in the non-alert state,

wherein the image processing unit (1120) determines that the driver is in the non-alert state when the determined position of the driver's head is determined not to be within a predetermined driver head area region within the vehicle compartment or when the driver's eyes are determined to be angled to an extent so as not to be viewing an area in front of the vehicle, and,

wherein, based on the driver's attention state according to the look-up-table classification, an appropriate warning is provided to the driver, so that, if the driver is detected to be in an attention partially diverted state, a mild warning is provided to the driver, and, when the driver is detected to be in an attention fully diverted state, a loud warning is provided to the driver."

Claim 1 of the third auxiliary request reads as follows:

"A driver alertness detection system, comprising:

an imaging unit (110) configured to image an area in a vehicle compartment of a vehicle where a driver's head is located, and

an image processing unit (1120) configured to receive the image from the imaging unit (1110), and to determine positions of the driver's head and eyes, wherein:

the driver alertness detection system is configured to use a classification training process to register the driver's head position and eye vector for the A-pillars, instrument panel, outside mirrors, rear-view mirror, windshield, passenger floor, center console, radial and climate controls within the vehicle, and configured to save a corresponding matrix of inter-point metrics to be used for a look-up-table classification of the driver's attention state, the inter-point metrics being geometric relationships between detected control points and comprising a set of vectors connecting any combination of control points including pupils, nostrils and corners of the mouth, whereon the driver alertness detection system further comprises;

a warning unit (1130) configured to determine, based on the determined position of the driver's head and eyes as output by the image processing unit (1120), whether the driver is in an alert state or a non-alert state, and to output a warning to the driver when the driver is determined to be in the non-alert state,

wherein the image processing unit (1120) determines that the driver is in the non-alert state when the determined position of the driver's head is determined not to be within a predetermined driver head area region within the vehicle compartment or when the driver's eyes are determined to be angled to an extent so as not to be viewing an area in front of the vehicle, and,

wherein, based on the driver's attention state according to the look-up-table classification, an appropriate warning is provided to the driver, so that, if the driver is detected to be in an attention partially diverted state, a mild warning is provided to the driver, and, when the driver is detected to be in an attention fully diverted state, a loud warning is provided to the driver."

VII. The Appellant's arguments (as far as relevant to the present decision) may be summarized as follows:

The subject-matter of claim 1 of the main request and of the first, second and third auxiliary requests, in conjunction with the application (WO-A), discloses the invention in a manner sufficiently clear and complete for the skilled person to be able to carry it out. Referring in particular to figures 8A, 8B and 8C and to paragraph [0029] of WO-A, the skilled person would understand that independent claim 1 intends a training process to be carried out in which the driver's head position and eye vector are registered for predetermined items in the vehicle, and a matrix of inter-point metrics, as defined, is saved in a look-up table.

The inter-point metrics represent geometric relationships between detected control points, such as vectors connecting any combination of control points including the pupils, nostrils and corners of the mouth.

In particular, the driver's head position is registered for items such as the A-pillars, instrument panel, outside mirrors, rear-view mirror, windshield, passenger floor, center console, and radial and climate controls within the vehicle.

Thus, when a detected state of the driver's head and eyes (corresponding to their actual position at any given instant in time) is determined, an appropriate warning is provided to the driver, if necessary, based on this detected position and on the look-up-table classification.

The warning is provided to the driver when the detected state (e.g. head position and eye vector) matches a predetermined state (i.e. one of said states registered for a predetermined item) which represents (according to the classification training and to the look-up table) a non-alert state of the driver.
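Purely by way of illustration of the Appellant's reading, the following minimal Python sketch shows one conceivable implementation of such a registration and look-up-table classification: the inter-point metrics are taken as the vectors connecting every pair of facial control points, and classification is done by nearest-entry matching. All names, the choice of Euclidean distances and the matching rule are assumptions made for this sketch; none of these details is given in claim 1 or in WO-A.

import numpy as np

def inter_point_metrics(control_points: np.ndarray) -> np.ndarray:
    """Flatten the vectors connecting every pair of control points (e.g. pupils,
    nostrils, mouth corners) into one feature vector; an assumed reading of the
    claimed 'matrix of inter-point metrics'."""
    n = len(control_points)
    vectors = [control_points[j] - control_points[i]
               for i in range(n) for j in range(i + 1, n)]
    return np.concatenate(vectors)

class LookUpTable:
    """Registers one feature vector per predetermined in-vehicle item during training."""

    def __init__(self) -> None:
        self.entries: dict[str, tuple[np.ndarray, str]] = {}

    def register(self, item: str, control_points: np.ndarray, state: str) -> None:
        # Training phase: the driver looks at a predetermined item (e.g. the rear-view
        # mirror) and the corresponding metrics and attention state are stored.
        self.entries[item] = (inter_point_metrics(control_points), state)

    def classify(self, control_points: np.ndarray) -> tuple[str, str]:
        # Matching phase: return the nearest registered entry by Euclidean distance.
        # This rule is assumed for illustration only; no rule is disclosed in WO-A.
        query = inter_point_metrics(control_points)
        best = min(self.entries,
                   key=lambda item: np.linalg.norm(self.entries[item][0] - query))
        return best, self.entries[best][1]

In such a sketch, the windshield might, for instance, be registered with the state "alert" and the passenger floor with "attention fully diverted"; the contested point in this appeal is precisely that neither the construction of the metrics nor any matching rule of this kind is disclosed in WO-A.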

Reasons for the Decision

1. The appeal is admissible.

2. The invention as defined in claim 1 of the third auxiliary request (which includes all the features of claim 1 of each of the main, first and second auxiliary requests) is not disclosed in European patent application No. 12 765 209.7 (WO-A) in a manner sufficiently clear and complete for it to be carried out by a person skilled in the art (Article 83 EPC).

The feature (of claim 1) reading "the driver alertness detection system is configured to use a classification training process to register the driver's head position and eye vector for the A-pillars, instrument panel, outside mirrors, rear view mirror, windshield, passenger floor, center console, radial and climate controls within the vehicle, and configured to save a corresponding matrix of inter-point metrics to be used for a look-up-table classification of the driver's attention state, the inter-point metrics being geometric relationships between detected control points and comprising a set of vectors connecting any combination of control points including pupils, nostrils and corners of the mouth" is derived from paragraphs [0027], [0029] and [0033] of WO-A and constitutes the central feature on which the driver alertness detection system of the invention is based.

Further, according to paragraph [0029] in WO-A, figure 8A is an image showing a driver in a full-alert state, figure 8B is an image showing a driver in an attention partially diverted state, and figure 8C is an image showing a driver in an attention entirely diverted state.

In the Board's view, the definition of claim 1 and the corresponding passages in the patent application (WO-A) do not teach the skilled person how a "look-up-table classification of the driver's attention state" is to be obtained based on said "matrix of inter-point metrics", the inter-point metrics representing "geometric relationships between detected control points and comprising a set of vectors connecting any combination of control points including pupils, nostrils and corners of the mouth".

In particular, WO-A does not teach how to derive from said "matrix of inter-point metrics" a "look-up-table classification of the driver's attention state", i.e. a classification which would permit the driver's attention state to be decided upon. Since a "matrix of inter-point metrics" is, according to WO-A (and to claim 1), a mathematical object representing a set of "geometric relationships between detected control points", a specific mathematical method and corresponding criteria (or algorithms) necessarily have to be determined in order to handle said matrix. No such mathematical methods and corresponding criteria allowing said matrix to be handled and a "look-up-table classification" to be obtained are disclosed or even suggested in the description of WO-A. In addition, the actual form and construction of said "matrix of inter-point metrics" is likewise not specified in WO-A. Therefore the skilled person would not know how to construct a "look-up-table classification" and, consequently, how to decide on the driver's attention state based on the video camera's image of the actual position of the driver's head and eyes at a given instant.
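Merely to indicate the kind of design decisions found missing (and without suggesting that any of them is taught in WO-A), even the simplest conceivable comparison rule, a thresholded nearest-neighbour test, would already require choosing a distance measure, a threshold and a fallback behaviour. The following fragment is purely hypothetical:

import numpy as np

def attention_state(live_metrics: np.ndarray,
                    table: dict[str, tuple[np.ndarray, str]],
                    threshold: float) -> str:
    """Return the attention state of the nearest registered entry, if close enough.
    Every choice below is an assumption made for illustration; WO-A leaves them open."""
    distances = {item: np.linalg.norm(vec - live_metrics)
                 for item, (vec, _state) in table.items()}
    best = min(distances, key=distances.get)
    if distances[best] > threshold:      # how close counts as a "match" is not disclosed
        return "unknown"                 # fallback behaviour is likewise not disclosed
    return table[best][1]                # e.g. "alert", "partially diverted", "fully diverted"

The table argument corresponds to the entries dictionary of the preceding sketch; which rule, distance measure and threshold were actually intended cannot be derived from WO-A.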

In addition, claim 1 and WO-A likewise do not teach how a video camera image representing the instantaneous position of the driver's head and eyes (as seen e.g. in figures 8A, 8B or 8C) should actually be compared with a hypothetical "look-up-table classification" in order to assess the driver's attention state. In effect, this step requires instructions concerning the kind of information to be extracted from a given video camera image and concerning the method and criteria (as above) to be applied in order to compare this information with that contained in the hypothetical "look-up-table classification". No such disclosure is to be found in the description of the patent application (WO-A).
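Again only as an illustration of the missing link between a camera frame and such a table: the landmark-detection step itself would have to be specified. In the fragment below, detect_control_points is a purely hypothetical placeholder (no such function or detector is named in WO-A), and its combination with the two sketches above is equally an assumption of this illustration.

import numpy as np

def detect_control_points(frame: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: locate pupils, nostrils and mouth corners in a frame.
    WO-A does not say which detector, image features or accuracy would be required."""
    raise NotImplementedError("not taught by the application")

def assess_frame(frame: np.ndarray,
                 table: dict[str, tuple[np.ndarray, str]],
                 threshold: float) -> str:
    # Chain the unspecified steps: landmark detection -> inter-point metrics -> comparison.
    # inter_point_metrics and attention_state are the functions from the sketches above.
    points = detect_control_points(frame)
    return attention_state(inter_point_metrics(points), table, threshold)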

The same conclusions apply a fortiori to claim 1 of the main request and of the first and second auxiliary requests, since each of these claims includes only some of the features of claim 1 of the third auxiliary request and thus contains even less information.

It follows that, for the same reasons as indicated in relation to claim 1 of the third auxiliary request, claim 1 of each of the aforesaid requests (in conjunction with the description) likewise does not disclose the invention in a manner sufficiently clear and complete for the skilled person to be able to carry out, e.g., a "classification of the driver's attention state".

Order

For these reasons it is decided that:

The appeal is dismissed.
