T 1019/07 of 22.7.2010

European Case Law Identifier: ECLI:EP:BA:2010:T101907.20100722
Date of decision: 22 July 2010
Case number: T 1019/07
Application number: 01948546.5
IPC class: H05B 37/02
Language of proceedings: EN
Distribution: D
Versions: Unpublished
Title of application: Method and apparatus for controlling a lighting system in response to an audio input
Applicant name: Philips Solid-State Lighting Solutions, Inc.
Opponent name: -
Board: 3.4.03
Headnote: -
Relevant legal provisions:
European Patent Convention 1973 Art 54
European Patent Convention 1973 Art 56
European Patent Convention 1973 Art 111(1)
European Patent Convention 1973 Art 113(1)
European Patent Convention 1973 Art 114(1)
Keywords: Novelty (yes)
Inventive step (yes)
Reimbursement of the appeal fee (no)
Remittal for further prosecution
Catchwords: -

Cited decisions: -
Citing decisions: -

Summary of Facts and Submissions

I. This is an appeal from the refusal of application 01 948 546 for the reason that the subject-matter of claim 1 lacked novelty (Article 54 EPC 1973).

II. At oral proceedings before the board, the appellant applicant requested that the decision under appeal be set aside and that a patent be granted on the basis of claim 1 of the new main request filed in the oral proceedings, and that the appeal fee be reimbursed.

III. The independent claim of this request reads as follows:

"1. An apparatus for executing a lighting program to control a plurality of light emitting diodes (LEDs) (40), the apparatus comprising:

a mapping table (2015t) to store a plurality of lighting programs;

an input to receive an audio signal (2003, 2005) in a digital format;

an audio decoder (2011) to digitally process the audio signal (2003, 2005) to determine and output at least one characteristic of the audio signal, wherein the audio decoder (2011) is configured to determine a beat of the signal based upon pulses within particular frequency bands of the audio signal, and the at least one characteristic of the audio signal relates to the beat; and

a mapper (2015), coupled to the audio decoder (2011) and to the mapping table (2015t), configured to execute a lighting program stored in the mapping table (2015t), to perform a mapping function from the at least one characteristic of the audio signal to generate control signals to control the plurality of LEDs;

wherein the lighting program includes one or more variable parameters that affect intensity and/or color of a lighting effect generated by the plurality of LEDs (40) in response to the control signals, and wherein

the mapper (2015) is configured, during the execution of the lighting program, to receive the output of the audio decoder (2011) to provide input values for the one or more variable parameters, to change the mapping function in response to the output received from the audio decoder (2011) and to generate the control signals based on the determined at least one characteristic of the audio signal."

IV. The following prior art documents are cited in this decision:

D1: EP 0 942 631 A

D11: GB 2 354 602 A

V. The examining division argued essentially as follows:

- The examining division recognized that, although there were differences in terminology between document D11 and the apparatus defined by claim 1, these differences did not reflect substantial technical differences. The apparatus defined by claim 1 had the same input (a digital audio signal) and the same output (control signals for lighting devices) as the apparatus of document D11, and the result was the same in both cases, namely a light show responsive to several aspects of the digital audio input signal (e.g. beat and amplitude in different frequency bands). From comparing figure 8 of the application with figure 1 of D11, it followed that in D11 the computer and its software comprised all the features of the mapper and most of the features of the audio decoder (in D11 the input device was external to the computer); but as all functionality of the mapper and audio decoder could easily be implemented in software or hardware or a combination thereof, it was not essential how the different functional blocks were schematically subdivided.

VI. The appellant applicant argued essentially as follows:

- In accordance with claim 1, the lighting program executed by the mapper was one of a plurality of lighting programs stored in a mapping table and included at least one variable parameter that affected an aspect of the lighting effect generated by the LEDs, the mapper being configured to provide input values for the at least one variable parameter during execution of the lighting program based on a beat of the audio signal determined based upon pulses within particular frequency bands and received from the audio decoder during the execution of the lighting program.

- It was thus clear that the one or more variable parameters were included in the executable lighting program itself and that the input values were not provided before the execution of the one program started. Also, the input values could not be provided before the program had been selected amongst the plurality of programs stored in the table, ie the input values of claim 1 were not used in the selection, but for varying a program already selected and being executed. The provision of input values for one or more variable parameters of a lighting program being executed was not equivalent to a selection of a lighting program (or a part or module thereof) for execution, regardless of the criteria on which such a selection might be based. Instead, in the present invention a lighting program including the required parameters had already been selected amongst the plurality of programs in the mapping table and the execution of the program had already been started by the mapper at the time when the provision of the input values for the variable parameters in the program took place based on the beat information received during the execution of the program.

- Figure 3 of D11 showed that the output of a timing analyser formed timing bus 25. As shown in Figure 4, the timing bus 25 and control bus 26 were fed to an act selector 30 to provide a means of selection of an act 31. The control bus 26 provided act selection instructions and the timing bus 25 provided timing information which was correlated by the act selector 30 with the act selection instructions from the control bus 26 to ensure that act selection was effected in time with the BPM or time indicator of audio sources. Although in D11 the timing analyser 20.1 could calculate beats per minute (BPM) by analysing triggers in data bus 10 or analysing the information in spectral data bus 6, there was no direct and unambiguous disclosure that the BPM information was determined based on pulses within particular frequency bands, and, even more importantly, that the BPM information was used to provide input values for at least one variable parameter of a lighting program from a mapping table during execution thereof and after the program had already been selected from the table. Instead, D11 gave a clear teaching of a selection of an act from a list of preset acts where the timing information could be used to synchronise the selection in time. However, D11 did not disclose or even suggest that the act selector would be configured to receive an output of an audio decoder during execution of a lighting program to provide input values for one or more variable parameters of the executable lighting program that affected an aspect of a lighting effect to generate control signals based on a determined beat of an audio signal during the execution of the already started lighting program. Instead, in D11 synchronisation of the selection of an act could be provided based on the timing information during generation of the program. It should be appreciated that the output of the act selection did not result in executable code, but several further stages of selection functions were still needed until the final stage of mood selection, and it was only after the final stage of mood selection that the execution of the generated lighting program could be started. Therefore D11 did not provide any explicit or implicit disclosure of a mapper providing input values for any parameters of the acts during execution of a selected act, as the execution could only start after the algorithm had gone through the different functional levels to create a program to be executed.

- Starting from D11, therefore, the current invention solved the problem of complexity in programming and of achieving varying lighting effects based on audio input in a flexible manner. The present invention overcame the inconvenience of D11 based on the realisation that each lighting program stored in a mapping table did have variable parameters, and that therefore an aspect of a lighting effect could be affected during the execution of the already selected lighting program by generation of input values for the one or more variable parameters based on the determined beat of the audio signal, thus avoiding the multi-step compiling arrangement suggested by D11, which progressed from a selection at the highest level to further selections at the lower levels before the ultimate program was obtained.

- The applicant also requested reimbursement of the appeal fee on the ground that the examining division committed a substantial procedural violation in rejecting the main and first auxiliary requests.

- Document D11 was cited by the examining division only 24 hours before the oral proceedings. The representative received it in the lounge at Heathrow airport on 10 October 2006. Mr. B. of the applicant's company was already flying across the Atlantic Ocean to attend the oral proceedings, so it was not feasible to request postponement of the oral proceedings. The reason given by the examining division to justify the late citation was submitted to be incorrect, because the amendments made on 5 October 2006 did not materially affect the patentability of the inventions claimed - they addressed the many formal objections raised, which were then not pursued at the oral proceedings. Given the late citation of D11, the examining division had a heightened duty to rely on it properly, ie correctly to construe both the teaching of D11 and the claims of the application. This was not done. The applicant believed that the examining division convinced itself that the document was novelty destroying because it had found and cited it only a day before the oral proceedings, with the result that the examining division then failed properly to consider the submissions of the applicant made at the oral proceedings concerning the proper objective scope of both the claims of the present application and the actual disclosure of D11. Moreover, the discussions at the oral proceedings did not correlate to the grounds for the decision, contrary to Article 113 EPC. Still further, the comment of the chairman in paragraph 2.12 of the minutes summarised the substantial procedural violation - the examining division looked at "functionalities" and failed to consider the claim language and the actual objective disclosure of D11 properly, and repudiated the need for a problem-solution approach to inventive step.

Reasons for the Decision

1. The appeal is admissible.

2. Claim 1

2.1 The examining division refused the application for the reason that the subject-matter of claim 1 lacked novelty over document D11. This document, however, discloses an apparatus for executing a lighting program to control a plurality of lighting devices 62 without specifying the nature of the light sources of the lighting devices (page 11, lines 3 to 12; Figure 1). Claim 1 on the other hand specifies that the apparatus controls a plurality of light emitting diodes (LEDs).

Notwithstanding any possible further differences between the subject-matter of claim 1 and that of D11, it has to be concluded that the apparatus according to claim 1 is new over D11 for this sole reason.

2.2 The apparatus of claim 1 is based on the embodiment of Figure 8, starting on page 32 "Controlling lighting systems in response to an audio input".

As disclosed there, the audio decoder 2011 can be implemented in hardware or software (page 35, lines 1 to 3). This is also true for many other components of the apparatus and, since this also applies to the apparatus of D11 (see Figure 1, computer 70 and software 80), a precise correspondence between the functional groups of the claim and those of the prior art is not always straightforward to establish. Different names are used for software subroutines performing essentially the same task, and the various tasks are grouped and divided differently between different subroutines. This was also recognized by the examining division (reasons, point 2.1.3 of the appealed decision).

2.3 In D11 the input device 3 is connected to input data bus 4 which is fed to the audio analysing section 5 of the software system 80. The input device 3 may have analog or digital data audio inputs (page 5, lines 21 to 30; Figure 1 and 2). The input device 3 and the audio analysing section 5 thus correspond to the input to receive an audio signal (2003, 2005) and to the audio decoder (2011) of claim 1, respectively.

2.4 The data from the input data bus 2 are analysed by a Fast Fourier Transform (FFT) spectral analyser and form the spectral data bus 6 (paragraph bridging pages 5 and 6). The spectral data bus 6 is passed to a plurality of trigger channels 7 which monitor specific frequency bands of the audio information and create triggers according to the function & parameter control 7.1 settings (page 6, lines 3 to 8; Figure 2). These triggers form data bus 10 which is analysed by timing analyser 20.1 to provide timing information to timing bus 25 (page 6, lines 31 to 32; Figure 3). The timing analyser 20.1 calculates inter alia the Beats per Minute (BPM) by analysing the triggers in data bus 10. Alternatively, the information in spectral data bus 6 can be used to calculate the BPM (page 7, lines 1 to 4). The timing signals on the timing bus allow synchronicity with the audio signals to be maintained.
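
For illustration only, the kind of processing described above (spectral analysis, triggers within particular frequency bands, and a BPM estimate derived from them) can be sketched as follows; the frame size, band limits and threshold are hypothetical assumptions and do not reproduce the implementation of D11:

```python
# Minimal sketch of beat (BPM) estimation from pulses within a frequency band.
# All parameters are hypothetical; this is not the implementation of D11.
import numpy as np

def estimate_bpm(samples, sample_rate=44100, frame=1024,
                 band=(60.0, 150.0), threshold=2.0):
    """Counts frames whose energy in a low-frequency band rises above a
    multiple of the average band energy (a 'trigger') and converts the
    number of trigger onsets into beats per minute."""
    energies = []
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        energies.append(spectrum[in_band].sum())
    energies = np.asarray(energies)
    if energies.size == 0 or energies.mean() == 0:
        return 0.0
    triggers = energies > threshold * energies.mean()
    onsets = np.count_nonzero(triggers[1:] & ~triggers[:-1])  # rising edges only
    duration_min = len(energies) * frame / sample_rate / 60.0
    return onsets / duration_min if duration_min else 0.0
```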

2.5 The appellant applicant argued that the audio decoder's output into the mapper (ie the input values for the one or more variable parameters that affect the lighting effect generated by the LEDs) related to the beat of the input audio signal, the beat being based upon pulses within particular frequency bands of the audio signal, and that this feature was not disclosed in D11.

2.6 The board considers however that the beat of the audio signal is determined in D11 from the frequency domain intensity distribution supplied by the FFT. This distribution is supplied as intensity vs. frequency and grouped in corresponding frequency bands. The specification in claim 1 that "particular" bands are used for determining the beat of the signal is not a differentiating feature, since not all frequency bands are relevant for determining the beat (eg high frequency components are completely irrelevant for that purpose) and a skilled person understands that only the relevant frequency bands are used in D11 for determining the beat.

The board concludes for these reasons that the features of the audio decoder 2011 of claim 1 are implicitly disclosed in D11.

2.7 Although the functional groups of D11 which correspond to the mapping table and to the mapper have not yet been identified, D11 discloses in the wording of claim 1:

An apparatus for executing a lighting program to control a plurality of lighting devices 62, the apparatus comprising:

an input 3 to receive an audio signal 1 in a digital format and

an audio decoder 5 to digitally process the audio signal to determine and output at least one characteristic of the audio signal, wherein the audio decoder is configured to determine a beat of the signal based upon pulses within particular frequency bands of the audio signal, and the at least one characteristic of the audio signal relates to the beat.

The board turns now to the mapper and the mapping table.

2.8 According to claim 1, the mapper 2015 is coupled to the audio decoder 2011 and to the mapping table 2015t. The mapping table stores a plurality of lighting programs, one of which is executed by the mapper. The lighting programs include variable parameters affecting the intensity and/or color of a lighting effect generated by the LEDs.

The mapper

(a) performs a mapping function from the characteristic of the audio signal provided by the audio decoder (ie at least a signal relating to the beat) to generate control signals for the plurality of LEDs,

(b) generates from the output of the audio decoder input values for the variable parameters of the lighting program, and

(c) changes the mapping function in response to the output of the audio decoder.

The mapper thus has three roles once the lighting program to be executed has been selected.
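
Purely by way of illustration, these three roles can be rendered in the following minimal sketch; all names, data structures and numerical values are hypothetical and are not taken from the application or from D11:

```python
# Illustrative sketch only - hypothetical names and values, not code from the
# application. It mirrors the functional blocks of claim 1: a mapping table
# storing lighting programs, an audio decoder output relating to the beat,
# and a mapper performing roles (a) to (c).

def pulse_program(beat_strength, colour_shift=0.0):
    """A lighting program with variable parameters affecting the intensity
    and colour of the effect generated by the LEDs."""
    intensity = min(255, int(255 * beat_strength))
    hue = int(360 * colour_shift) % 360
    return [("LED", channel, intensity, hue) for channel in range(4)]


MAPPING_TABLE = {"pulse": pulse_program}   # plurality of stored lighting programs


def mapper_step(program_name, decoder_output, mapping_gain=1.0):
    """(a) maps the beat-related characteristic to LED control signals,
    (b) feeds the decoder output into the variable parameters of the lighting
        program being executed, and
    (c) changes the mapping function (here: its gain) in response to the
        decoder output."""
    program = MAPPING_TABLE[program_name]          # program already selected
    beat = decoder_output["beat_strength"]         # output of the audio decoder
    if beat > 0.8:                                 # role (c)
        mapping_gain = 0.5
    control_signals = program(beat * mapping_gain, colour_shift=beat)  # roles (a), (b)
    return control_signals, mapping_gain


if __name__ == "__main__":
    print(mapper_step("pulse", {"beat_strength": 0.9}))
```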

2.9 According to D11, the lighting program is defined at different operational levels which are named sequentially "performance", "act", "scene" and "mood". The first and highest level of operational control is the "performance" 20 (page 6, lines 26 to 28; Figure 3). The stored "performances" 20(1), 20(2), ..., 20(n) can be equated to the mapping table of claim 1 which is defined as storing a plurality of lighting programs, in the case of D11 the "performances".

2.9.1 At the next hierarchical level of operation, the "act" 31, a plurality of texture detectors 31.1 and scene arrangers 35 arrange the next level of operation, the "scene" 41. The acts 31 create scene selection instructions in two ways based on data supplied by timing bus 25 and data bus 10 (page 8, lines 7 to 18; Figure 4).

2.9.2 The first method operates as follows: scene arranger 35 contains a list of available scenes 41 and a time scale against which certain scenes 41 may be instructed to apply. The period of time over which scenes 41 are selected by scene arranger 35 is measured in units directly related to the BPM or time signature of audio sources 1 as calculated by timing analyser 20.1 and supplied by timing bus 25. The result is a set of scene change instructions created by scene arranger 35 and sent to scene bus 36, which are in time with the BPM of audio source 1 (page 8, lines 19 to 25).

2.9.3 The second method that acts 31 use to create scene selection instructions is to select scenes autonomously and automatically according to the settings of texture detectors 31.1. Texture detectors 31.1 comprise a set of definable filters, functions and parameter settings that may be used to identify certain aspects of audio sources including eg beats, treble, middle or specific audio band activity, silence, crescendos or other musical nuances, and specific patterns or instrument characteristics (page 8, line 26 to page 9, line 3).

2.9.4 The next operational level comprises the "moods" 51 which are selected for each scene 41 by function & parameter controls 41.1 and mood arranger 41.2. Mood arranger 41.2 provides a means for selecting, from a list of available options, which moods 51 are to be used for a particular scene 41, and sends mood selection instructions to mood bus 45. Function & parameter controls 41.1 provide a means of defining, from a list of options, how a scene 41 will use the chosen moods. Examples of such options are: using a certain mood 51 every time scene 41(n) is applied, ie cycle, or using a certain mood 51 only the first time scene 41(n) is applied, ie one shot (page 9, lines 22 to 30; Figure 5).

2.9.5 Finally, the "moods" 51 comprise function & parameter controls 51.1 and an output arranger 51.2 which provides a means of arranging data to be sent to lighting devices 62 via output bus 55, output devices 60 and connections 61 (page 10, lines 5 to 7; Figure 6).
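
Purely for illustration, the hierarchical chain of selections described in points 2.9 to 2.9.5 can be sketched as follows; the names and the selection rule are hypothetical and only mirror the structure of the levels, not the actual implementation of D11:

```python
# Illustrative sketch of the hierarchical selection chain (performance ->
# act -> scene -> mood); names and rules are hypothetical, not D11's code.

PERFORMANCE = {                                   # highest operational level
    "act_energetic": {"scene_strobe": ["mood_flash", "mood_chase"],
                      "scene_wash":   ["mood_fade"]},
    "act_calm":      {"scene_wash":   ["mood_fade", "mood_glow"]},
}

def texture_detector(features):
    """Autonomous, automatic selection from audio features (second method)."""
    return "act_energetic" if features["beat_strength"] > 0.5 else "act_calm"

def select_chain(features, mood_index=0):
    act = texture_detector(features)              # act level
    scene = next(iter(PERFORMANCE[act]))          # scene level
    moods = PERFORMANCE[act][scene]
    mood = moods[mood_index % len(moods)]         # mood level ("cycle" option)
    return act, scene, mood

print(select_chain({"beat_strength": 0.8}))       # ('act_energetic', 'scene_strobe', 'mood_flash')
```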

2.10 The appellant applicant argued that in D11 any use of beat information from the audio signal was either employed to establish a clock on timing bus 25 or to determine scene selection, but was not employed to provide input values for variable parameters in any scene or mood program. Moreover, the lighting program consisted in synchronizing pre-defined functions with the beat of the audio signal, but had no versatility during the execution of the program.

2.11 The board considers that these arguments apply to the first method of act selection disclosed in D11 (point 2.9.2 discussed above). However, in the second method the scenes are selected autonomously and automatically according to the settings of texture detectors 31.1 (ibid point 2.9.3). The texture detectors identify certain aspects of the audio signal and on this basis it is decided which scenes and moods are employed. This decision as to which scenes and moods are used cannot be taken autonomously and automatically, ie without any human intervention, unless variable parameters are used whose input values are defined when the audio signal is analysed by the texture detectors.

For these reasons, the board judges that functions (a) and (b) of the mapper mentioned in point 2.8 are found in the apparatus of D11 at the level of act 31 in texture detector 31.1 and scene arranger 35, namely to perform a mapping function from the characteristic of the audio signal provided by the audio decoder to generate control signals for the plurality of LEDs, and to generate from the output of the audio decoder input values for the variable parameters of the lighting program.

2.12 The apparatus of claim 1 therefore differs from the apparatus of D11 in that

(i) LEDs are used as lighting devices and that

(ii) the mapper changes the mapping function in response to the output received from the audio decoder.

The two features address different problems and may be treated separately.

2.13 Feature (i) addresses the issue of how to put the teaching of D11 into practice and involves the obvious solution of using LEDs, a technology which was available to the skilled person at the filing date of the application (see eg D1, [0015] and the present application, page 40, lines 8 to 10).

2.14 The board agrees with the appellant applicant that the objective problem addressed by feature (ii) can be formulated as how to increase the versatility of audio control of the lighting program. This problem was also addressed in the original application by stating that existing programs have limited functionality with respect to the visualization of music (page 6, lines 9 to 10).

The application discloses that the mapping function can be changed, eg by including a variable in a single mapping function that can result in changes of the mapping output, or by switching between different mapping functions in the mapping table (page 38, lines 5 to 25).
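
By way of illustration only, the two possibilities mentioned in the application can be sketched as follows; the mapping functions and the switching rule are hypothetical and are not taken from the application:

```python
# Illustrative sketch of the two possibilities described above; the mapping
# functions and the switching rule are hypothetical, not the application's code.

def map_smooth(beat):
    """Mapping function 1: gentle intensity ramp."""
    return int(128 + 127 * beat)

def map_strobe(beat):
    """Mapping function 2: on/off strobing."""
    return 255 if beat > 0.5 else 0

def map_parametric(beat, gamma=1.0):
    """Single mapping function containing a variable (gamma) whose value
    changes the mapping output."""
    return int(255 * beat ** gamma)

def mapper_step(beat):
    """Switches between different mapping functions in the mapping table
    in response to the output received from the audio decoder."""
    mapping = map_strobe if beat > 0.7 else map_smooth
    return mapping(beat)

for beat in (0.2, 0.9):
    print(mapper_step(beat), map_parametric(beat, gamma=2.0))
```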

2.15 As mentioned previously (point 2.9) the mapping table of the present application can be equated to the set of "performances" 20(1) ... 20(n) of D11. The texture detector 31.1 and the scene arranger 35 found at the next operational level, the "act", fulfil the function of the mapper by which input values for the variable parameters of the lighting program are created from the analysis of the audio signal, since at this level the selection of "scenes" and "moods" is done autonomously and automatically according to the settings of texture detectors 31.1. However, no change of the selected mapping function is suggested, whether in response to an external or to an internal signal. In particular, the function & parameter controls 41.1 and the mood arranger 41.2 select from a list of options the moods to be used for a specific scene. These options are illustrated in D11 as eg repeatedly using a mood (cycling) or using a mood only once (one-shot) (page 9, lines 22 to 30). These options, however, do not change the mapping function, but keep the same mapping function in use.

Also the options available to the function & parameter controls 51.1 and to the output arranger 51.2 at the "mood" operational level do not include a change of the mapping function; they are fixed functions that can be used and combined when defining the particular mood, but remain fixed after that (page 10, line 5 to page 11, line 2; Figure 6).

Changing the mapping function during execution of a lighting program therefore adds a further degree of versatility to the apparatus for executing a lighting program which is not derivable from the disclosure of document D11 or from the other available prior art documents.

2.16 Consequently, the board finds that the apparatus for executing a lighting program of claim 1 is new and involves an inventive step.

3. Reimbursement of the appeal fee

3.1 The appellant argued that the examining division committed a substantial procedural violation in rejecting the main and first auxiliary requests at the oral proceedings before the examining division on the basis of document D11.

3.2 The minutes of the oral proceedings before the examining division state that document D11 was sent by fax on the morning of 10 October 2006, ie one day before the oral proceedings held on 11 October 2006 (point 1.3.1). The appellant alleges that due to the late introduction of D11 he could not request postponement of the oral proceedings.

3.3 The board, however, cannot recognize that this request could not have been submitted at the oral proceedings, since according to the minutes (which the board assumes reproduce correctly the course of the oral proceedings, as they were not contested by the appellant) the chairman asked the representative whether he could comment on D11 or whether he needed further time to consider it (point 1.3). Had the representative requested a postponement of the oral proceedings then and there, the examining division would have had the obligation to give a reasoned decision on this request. In the absence of such a request, the fact that no postponement was granted cannot constitute a procedural violation.

3.4 According to the decision, the late introduction of document D11 was prompted by the applicant's submission, three days before the appointed oral proceedings, of five new requests claiming for the first time features taken from the description (decision under appeal, points 1.11 and 1.12).

The appellant applicant alleged that the new claim requests contained amendments that did not require the introduction of a new document, as they only addressed formal objections which were then not pursued during the oral proceedings.

However, irrespective of whether this assessment is correct or not, the examining division has the right and even the duty to introduce of its own motion at any time any facts, evidence and arguments it becomes aware of (Article 114(1) EPC), of course with due regard to the party's right to be heard (Article 113(1) EPC), ie by giving the parties the time required for addressing these facts, evidence and arguments.

3.5 The appellant further argues that the examining division did not correctly construe the teaching of D11 or that of the claims. However, an incorrect interpretation of the claims or of the state of the art, if such a misinterpretation occurred, does not constitute a procedural violation, but is an error of judgement.

3.6 Although the comments at point 2.12 of the minutes, conflating the concepts of novelty and inventive step, may be disputable, they at worst reflect an error of judgement and do not constitute a procedural violation. These statements, moreover, may be understood as meaning that the examining division did not recognize any substantial difference between the claims and the state of the art and that, if there were any minor differences, they did not involve an inventive step.

3.7 Finally, the appellant alleges that the discussion at the oral proceedings did not correlate to the grounds for the decision and that, therefore, his right to be heard was violated. It is the established case law of the boards that a violation of the right to be heard is a substantial procedural violation. However, an unsubstantiated allegation which does not specify how this right was violated does not allow the board to decide in the appellant's favour.

3.8 The board judges, for these reasons, that the appellant has not shown the occurrence of a substantial procedural violation; the request for reimbursement of the appeal fee is therefore refused.

4. The board has addressed the reason for refusing the present application, ie lack of novelty and inventive step of the apparatus of claim 1. It still remains to be assessed whether the dependent claims are consistent with amended claim 1, and the description remains to be adapted to the claims. The board therefore uses the discretion conferred on it by Article 111(1) EPC and decides to remit the case to the department of first instance for further prosecution.

ORDER

For these reasons it is decided that:

1. The decision under appeal is set aside.

2. The case is remitted to the department of first instance for further prosecution.

3. The request for reimbursement of the appeal fee is refused.
