T 0980/15 (Hearing system/SONOVA) of 21.3.2019

European Case Law Identifier: ECLI:EP:BA:2019:T098015.20190321
Date of decision: 21 March 2019
Case number: T 0980/15
Application number: 07821399.8
IPC class: H04R 25/00
Language of proceedings: EN
Distribution: D
Title of application: HEARING SYSTEM AND METHOD FOR OPERATING A HEARING SYSTEM
Applicant name: Sonova AG
Opponent name: Sivantos Pte. Ltd.
Board: 3.5.03
Headnote: -
Relevant legal provisions:
European Patent Convention Art 56
Rules of procedure of the Boards of Appeal Art 12(4)
Keywords: Inventive step - (yes)
Admissibility of late-filed document E8 - (no)
Catchwords:

-

Cited decisions:
T 1002/92
Citing decisions:
-

Summary of Facts and Submissions

I. The present case concerns an appeal filed by the opponent (henceforth, appellant) against the decision of the opposition division which held that, account being taken of the amendments made by the proprietor according to a first auxiliary request, the patent No. 2 201 793 and the invention to which it relates met the requirements of the EPC. In particular, the subject-matter of claims 1, 8 and 9 of the first auxiliary request was new with respect to the disclosure of document E1 and involved an inventive step when starting out from E1 and taking into account common general knowledge, and in view of a combination of documents E1 and E4 (see document list below, point VII).

II. In its statement of grounds of appeal, the appellant requested that the decision under appeal be set aside and that the patent be revoked. It argued that the subject-matter of claims 1 and 8 was not new with respect to the disclosure of E1 and did not involve an inventive step when starting out from E1 and taking into account common general knowledge, and in view of a combination of documents E1 and E8 (see document list below). Document E8 was filed for the first time with the statement of grounds of appeal.

III. In a letter of reply to the appeal, the patent proprietor (henceforth, respondent) requested that the appeal be dismissed (main request). Alternatively, it requested that the patent be maintained in amended form in accordance with the claims of either a first or a second auxiliary request filed therewith.

IV. In a communication accompanying a summons to oral proceedings, the board gave the preliminary opinion that the subject-matter of claim 1 appeared to be new with respect to the disclosure of E1 and to involve an inventive step having regard to E1 in combination with common general knowledge or with E4. The board also stated that the admissibility of E8 was an issue to be considered, although it took the preliminary view that E8 was admissible.

V. With a submission dated 19 February 2019, the respondent filed claims of a third auxiliary request.

VI. Oral proceedings were held on 21 March 2019.

When it came to the discussion of E8, the respondent (patent proprietor) requested that E8 not be admitted.

The final requests were established to be as follows:

The appellant requested that the decision under appeal be set aside and that the patent be revoked.

The respondent requested that the appeal be dismissed or, in the alternative, that the patent be maintained in amended form on the basis of the claims of either the first or the second auxiliary request as submitted with the letter dated 2 December 2015, or on the basis of the third auxiliary request submitted with the letter dated 19 February 2019.

At the conclusion of the oral proceedings, the chairman announced the board's decision.

VII. The following documents are relevant to the board's decision:

E1 = EP 1 841 286 A2

E4 = EP 0 681 411 A1

E8 = EP 1 404 152 A2

VIII. Claim 1 of the main request reads as follows:

"Method for operating a hearing system (1) comprising

at least one hearing device (10);

at least one signal processing unit (103);

at least one user control (111,112) by means of which at least one audio processing parameter of said signal processing unit (103) is adjustable;

a sensor unit (104), wherein said sensor unit (104) receives sound (5);

said method comprising the steps of

a) obtaining adjustment data (userCorr) representative of adjustments of said at least one parameter carried out by operating said at least one user control (111,112);

b) obtaining characterizing data (p1,...,pN) from data outputted from said sensor unit (104) substantially at the time said adjustment data (userCorr) are obtained, wherein said sensor unit comprises a classifying unit (104) for classifying said received sound (5) according to N sound classes, with an integer N >= 2, and wherein said characterizing data comprise similarity factors (p1,...,pN) which are indicative of the similarity between said received sound (5) and sound representative of a respective class;

c) deriving correction data (learntCorr) from said adjustment data (userCorr);

wherein step c) is carried out in dependence of said similarity factors (p1,...,pN), and wherein a time-dependent function is used for carrying out step c); and

d) recognizing an update event; and, upon step d):

e) using corrected settings for said at least one audio processing parameter in said signal processing unit (103), which corrected settings are derived in dependence of said correction data (learntCorr)."

IX. Independent claims 8 and 9 are claims respectively for a hearing system and "computer program products" [sic] which essentially correspond technically to method claim 1. For the sake of conciseness, the wording of these claims is not reproduced.

X. In view of the board's decision, it is not necessary to reproduce the claims of the auxiliary requests.

Reasons for the Decision

1. Main request - claim 1 - novelty and inventive step

1.1 The present patent concerns a hearing device or system having parameters adjustable automatically and/or by the user. An example of the latter is output volume (cf. paragraph [0076] ff.). When the device is turned on (this being an example of an "update event"), the volume is set to a stored start value ("default value"). The stored start value evolves over time to a new value ("correction data") based on the user's volume adjustments ("adjustment data"). The hearing system also takes account of the acoustic situation by classifying the received signal into two or more sound classes, which for example represent typical acoustic situations such as "pure speech", "speech in noise", "noise", or "music" (cf. paragraph [0070]).
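
Purely by way of illustration, the mechanism summarised above may be sketched as follows; the names, the smoothing rule and the numerical values are assumptions made for this sketch only and are not taken from the patent:

    # Illustrative sketch only: names, smoothing rule and values are assumed,
    # not taken from the patent.

    N_CLASSES = 4  # e.g. "pure speech", "speech in noise", "noise", "music"

    default_gain = 0.0                 # stored start value ("default value")
    learnt_corr = [0.0] * N_CLASSES    # correction data per class ("learntCorr")

    def on_user_adjustment(user_corr, similarity, alpha=0.1):
        # Update the per-class correction data from a user adjustment
        # ("userCorr"), weighted by the similarity factors p1,...,pN obtained
        # from the classifier at the time of the adjustment.
        for i, p in enumerate(similarity):
            # the stored correction evolves gradually (a time-dependent
            # learning behaviour) towards the user's preferred setting
            learnt_corr[i] += alpha * p * (user_corr - learnt_corr[i])

    def on_update_event(similarity):
        # On an update event (e.g. switching the device on), derive a
        # corrected setting from the correction data and the current
        # similarity factors.
        return default_gain + sum(p * c for p, c in zip(similarity, learnt_corr))

On such a sketch, repeated user adjustments in, say, a noisy environment would gradually shift the correction learnt for the "noise" class, so that after the next update event the device starts from a value closer to the user's preference in that situation.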

1.2 The closest prior art document E1 relates to a similar type of hearing system to that just described.

1.3 Using the wording of claim 1, E1 discloses a method for operating a hearing system comprising

at least one hearing device (Fig. 1);

at least one signal processing unit (2, 7);

at least one user control (9) by means of which at least one audio processing parameter of said signal processing unit (2, 7) is adjustable;

a sensor unit (1, 2, 7), wherein said sensor unit receives sound;

the method comprising the steps of

a) obtaining adjustment data ("Lautstärkeeinstellung") representative of adjustments of said at least one parameter carried out by operating said at least one user control (cf. paragraph [0033]: "Dabei wird die Lautstärkeeinstellung durch den Benutzer sprunghaft verändert.");

b) obtaining characterizing data (here, an updated value of a parameter representing a start value ("Startwert" or "Speicherwert"; cf. paragraph [0019])) from data outputted from said sensor unit substantially at the time said adjustment data are obtained (cf. paragraph [0030]: "Dabei können die Signalverarbeitung betreffende Parameter auch automatisch durch die Steuereinheit 7 ermittelt und eingestellt werden, z. B. infolge einer Analyse der augenblicklichen Hörsituation."; N.B.: it is apparent that, in use, the user would on occasion change the volume setting, resulting in new "adjustment data" being obtained substantially at the same time as, for example, a change to a noisy environment),

wherein said sensor unit comprises a signal analyser for analysing the actual acoustic situation (paragraph [0030]: "Analyse der augenblicklichen Hörsituation"),

c) deriving correction data from said adjustment data; wherein step c) is carried out in dependence of the characterizing data, and wherein a time-dependent function is used for carrying out step c) (cf. Fig. 2, in particular the dotted lines representing the evolution of the start value, and claim 1: "Automatisches Bestimmen eines von dem Startwert und dem aktuellen Wert des Parameters verschiedenen Speicherwertes derart, dass der Speicherwert, ausgehend von dem Startwert, dem aktuellen Wert des Parameters in Abhängigkeit der Zeitdauer, für die der aktuelle Wert des Parameters eingestellt ist, angenähert wird"); and

d) recognizing an update event ("Ein- bzw. Umschalten", see step e)); and, upon step d):

e) using corrected settings for said at least one audio processing parameter in said signal processing unit, which corrected settings are derived in dependence of said correction data (cf. claim 1: "Speichern des Speicherwertes in der Speichereinrichtung derart, dass nun nach dem Ein- bzw. Umschalten der Speicherwert als neuer Startwert automatisch eingestellt wird").

1.4 The subject-matter of claim 1 therefore differs from the disclosure of E1 in that:

(i) the signal analyser comprises a classifying unit for classifying said received sound according to N sound classes, with an integer N >= 2 (N.B.: although E1, col. 2, lines 5-11, mentions, in connection with the prior art, analysing a sound signal in order to determine the actual acoustic situation, for example "quiet surroundings", "conversation with noise disturbance" or "car journey", the signal analysis referred to in paragraph [0030] does not explicitly refer to this type of analysis);

(ii) the characterizing data comprise "similarity factors (p1,...,pN) which are indicative of the similarity between said received sound (5) and sound representative of a respective class";

(iii) step c) is carried out "in dependence of said similarity factors".

1.5 "Similarity factors" are understood as values representing the similarity of the received sound signal to a respective sound class. In the art there appears to be no standard definition of how to measure similarity. This feature is therefore interpreted broadly to embrace any measure or indication of similarity.

1.6 With respect to claim 1, it is noted that the characterizing data comprise similarity factors on which step c) is dependent, meaning that step c) is carried out in dependence on at least a first similarity factor, indicating the similarity of the input signal with a first sound class, and a second similarity factor, indicating the similarity with a second sound class.

1.7 With respect to novelty, the appellant argues that E1 implicitly discloses the above-mentioned distinguishing features, since they are implicitly comprised in a signal analysis which discriminates between different acoustic situations. In the appellant's view, it is implicit that probabilities or similarities are taken into account in a classification, or that some value or other, e.g. of a frequency component, has to be measured in order to identify a sound class, the magnitude of this signal being in itself a similarity factor. Furthermore, a standard classifier outputs a binary signal indicating via a 1 or a 0 whether each particular class of the plurality of classes is present; these binary outputs are, in the broadest sense, also similarity factors, since they are no different from similarities of 100% and 0%.

1.8 The board however finds these arguments unconvincing. A finding of lack of novelty has to rely on a direct and unambiguous disclosure. Features can be regarded as implicit only if the skilled person would immediately recognise that nothing other than what is alleged to be implicit could be present, which the board does not consider to be the case here. There is a difference between the type of processing which must inherently occur in the signal analyser of E1 and that which might obviously occur, the latter being a matter to be considered in respect of inventive step.

1.9 The subject-matter of claim 1 is therefore new with respect to the disclosure of E1 (Articles 52(1) and 54 EPC).

1.10 The technical effect of the above-mentioned distinguishing features (i) to (iii) (cf. point 1.4 above) is that "mixed-mode" acoustic situations in which the signal has some similarity to more than one sound class can be taken account of in the determination of the correction data of step c), leading to a correction value which better represents the real acoustic situation (cf. paragraph [0088] ff. and Fig. 4 of the patent).
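
By way of a purely illustrative example (the formula is an assumption made for this summary and is not reproduced from the patent), corrected settings derived in dependence of the similarity factors could take a form such as

\[ \text{setting} = \text{default} + \sum_{i=1}^{N} p_i \, c_i, \]

where c_i denotes the correction data learnt for the i-th sound class; in a mixed situation with, say, p_1 = 0.6 for "speech" and p_2 = 0.4 for "noise", both learnt corrections contribute, rather than only that of the dominant class.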

1.11 The problem to be solved is regarded as being to improve the method disclosed in E1 to better adapt the stored start value to the acoustic environment.

1.12 The appellant argues that the features distinguishing claim 1 from E1 are in any case obvious, since the signal analysis carried out in E1, if not implicitly based on similarity factors, would obviously be so.

1.13 The board however also finds this argument unconvincing. The approach of E1 may suggest that the signal analyser discriminates which one of several acoustic situations best matches the actual situation (E1, paragraph [0030] and col. 2, lines 5 to 11). However, even if, for the sake of argument, this discrimination were based on first obtaining similarity factors, the analyser output (equivalent to the "sensor unit" output of claim 1) on which the adaptation of the stored volume start value depends would be a signal indicative of similarity with just one detected acoustic situation rather than consisting of a plurality of signals each indicative of similarity with a respective acoustic situation. Therefore, even if a classification in E1 were based on a measurement of similarities, or produced a binary output, the adaptation of the stored start value could not take a mixed acoustic situation into account.

1.14 The board notes further that, although in such a scenario the calculation of the stored start value would be indirectly dependent on more than one similarity factor, since these would have been involved earlier in the determination of the acoustic situation, a fair reading of claim 1 is that the step of deriving correction data in accordance with step c) must be carried out directly in dependence on more than one similarity factor, in order that a mixed acoustic situation can be taken into account. Neither E1 nor common general knowledge points the skilled person in this direction. The dependence of the correction data on more than one similarity factor is therefore not obvious.
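
The distinction drawn in points 1.13 and 1.14 between an adaptation based on a single detected acoustic situation and an adaptation depending directly on several similarity factors may be illustrated schematically as follows; the snippet is an assumed sketch and does not reproduce either E1 or the patent:

    # Hard decision (the board's reading of E1): only the single
    # best-matching acoustic situation influences the adaptation.
    def adapt_hard(similarity, corrections):
        winner = max(range(len(similarity)), key=lambda i: similarity[i])
        return corrections[winner]

    # Soft decision (as claimed): the adaptation depends directly on more
    # than one similarity factor, so a mixed situation is taken into account.
    def adapt_soft(similarity, corrections):
        return sum(p * c for p, c in zip(similarity, corrections))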

1.15 The board concludes that the subject-matter of claim 1 involves an inventive step when starting out from E1 and taking account of common general knowledge (Articles 52(1) and 56 EPC).

1.16 In the impugned decision, the opposition division further considered the combination of E1 and E4 and came to the conclusion that the subject-matter of claim 1 involves an inventive step with respect to this combination. The appellant did not contest this conclusion in its statement of grounds of appeal, in its submission dated 19 February 2019 or at the oral proceedings before the board.

1.17 The board has reviewed this finding and agrees with it. E4 discloses that a signal analyser 11 determines the acoustic situation and produces control signals 13 which are taken into account when adapting the hearing aid (cf. col. 5, lines 29-33). It is also stated that the signal analyser may use fuzzy logic (cf. col. 6, lines 19-23). However, neither is the nature of the control signals 13 clear, nor can it be inferred that the fuzzy logic algorithm in the signal analyser 11 operates with signals which can be regarded as "similarity factors", or that more than one such similarity factor would be directly involved in updating a parameter. Consequently, the combination of E1 and E4 does not lead obviously to the subject-matter of claim 1.

2. Admissibility of document E8

2.1 The appellant argues that the subject-matter of claim 1 lacks an inventive step based on the combination of E1 and E8, starting out from E1 or alternatively from E8.

2.2 The respondent argues that E8 should not be admitted as being late-filed.

2.3 E8 was filed by the appellant for the first time with the statement of grounds of appeal, i.e. after expiry of the opposition period under Article 99 EPC, and is thus late-filed. Its admissibility therefore has to be considered. In accordance with Article 12(4) RPBA, the admission into the appeal proceedings of a document which could have been filed in the first-instance proceedings lies within the discretion of the board.

2.4 Considering that the distinguishing features over E1 are taken from dependent claim 8 of the patent, that the patent was opposed in its entirety, and that E8 was cited in the description of the patent application as filed and would therefore have been well known to the opponent (cf. page 4, lines 10-13, of WO 2009/049672 A1), it is apparent that E8 could and should have been cited during the opposition proceedings if the opponent had wished it to be taken into account. Hence, also in this respect, E8 was late-filed.

2.5 That notwithstanding, in accordance with the case law, a late-filed document such as E8 may exceptionally be admitted into the appeal proceedings if it is prima facie highly relevant, i.e. highly likely to prejudice the maintenance of the patent (cf. T 1002/92, point 3.4 of the reasons). In the present case, the board does not consider E8 to be prima facie highly relevant, for the following reasons.

2.6 Whilst it is true that E8 concerns an adaptive hearing system and provides a solution to the problem that a real acoustic situation may comprise a mix of standard acoustic situations, the board considers that E8 can only be combined with E1 with the benefit of hindsight.

2.7 In this respect, E8 concerns a rather complex method comprising two off-line processes before a real-time update of a parameter can take place (cf. paragraph [0013]). The disclosed method is based on the recognition (cf. paragraphs [0021] and [0022]) that an arbitrary audio signal can be considered as a superposition of generic feature vectors Vgi from which it is said that 95% of all audio signals can be constructed. These generic feature vectors are determined in an off-line process, following which the real-time adjustment is carried out, inter alia, by determining a weighting vector, i.e. a1, ..., an (cf. paragraph [0035]) comprising the weights of each generic signal Vgi.
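
In the notation used in E8 as summarised above, the underlying idea may be written schematically as the superposition

\[ V \approx \sum_{i=1}^{n} a_i \, V_{g,i}, \]

where V stands for the feature vector of the actual audio signal, the generic feature vectors V_{g,i} are determined in the off-line process, and only the weighting vector (a_1, ..., a_n) is determined in real time; the exact form of the approximation is not reproduced here, the equation serving only as a schematic summary.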

2.8 Even if these weights were considered to be "similarity factors", the classification into unknown generic audio classes determined in an off-line procedure is conceptually quite different from a direct real-time classification of a real acoustic situation into, for example, one of those mentioned in E1, col. 2, lines 9-11. If the skilled person were to apply the teaching of E8 to E1, he would, rather implausibly, have to both introduce the above-mentioned off-line processing and fundamentally change the sound classification concept. It is further not clear whether the method of E8 could readily be adapted for updating a parameter such as the stored start value of E1, which is influenced by abrupt volume changes made by the user, or how the method of E8 would behave when the hearing aid is switched off and on.

2.9 The board therefore considers that the combination of E1 and E8 is highly unlikely to prejudice the inventive step of the claimed subject-matter.

2.10 E8 is therefore not admitted to the proceedings (Article 12(4) RPBA).

3. Independent claims 8 and 9 - inventive step

The above assessment given in connection with claim 1 applies, mutatis mutandis, to the other independent claims 8 and 9.

4. Conclusion

The board concludes that the subject-matter of each of the independent claims of the main request is new and involves an inventive step (Articles 52(1), 54 and 56 EPC). It follows that the appeal must be dismissed.

Order

For these reasons it is decided that:

The appeal is dismissed.
