European Case Law Identifier: ECLI:EP:BA:2025:T028524.20250310
Date of decision: 10 March 2025
Case number: T 0285/24
Application number: 18937166.9
IPC class: G06F 16/00, G06F 16/38
Language of proceedings: EN
Distribution: D
Title of application: Information display method and device
Applicant name: Huawei Technologies Co., Ltd.
Opponent name: -
Board: 3.5.07
Headnote: -
Relevant legal provisions: Art. 56 EPC, Art. 123(2) EPC, Art. 12(2), (4) and (6) RPBA, Art. 13(1) and (2) RPBA, Art. 15(1) RPBA
Keywords:
Amendments - main request, first and second auxiliary requests - added subject-matter (yes)
Amendment to appeal case - third, fourth, eighth and ninth auxiliary requests - taken into account (no)
Amendment after notification of Art. 15(1) RPBA communication - fifth to seventh auxiliary requests - exceptional circumstances (yes)
Inventive step - fifth to seventh auxiliary requests (no)
Catchwords: -
Cited decisions: T 0928/03, T 0598/14, T 1741/08, T 1526/19
Summary of Facts and Submissions
I. The appeal lies from the decision of the examining division to refuse European patent application No. 18937166.9.
II. The summons to oral proceedings before the examining division, to be held on 29 September 2023, had been sent out on 22 December 2022. With its letter of 28 August 2023, the appellant submitted a main request and first and second auxiliary requests. The examining division gave its preliminary opinion on the three requests in a brief communication of 20 September 2023. After the appellant's announcement on 26 September 2023 that it would not attend the oral proceedings, the examining division cancelled the oral proceedings and issued the decision in writing.
III. The following prior-art document was cited in the decision under appeal:
D4: WO 2018/004730 A1, 4 January 2018.
IV. The examining division decided that the subject-matter of the independent claims of the main request and first and second auxiliary requests lacked inventive step over prior-art document D4. The examining division was further of the opinion that the subject-matter of the dependent claims of the main request did not appear inventive either.
V. In the statement of grounds of appeal, the appellant maintained the requests considered in the decision under appeal and filed claims according to third and fourth auxiliary requests and amended description pages. The appellant requested that the decision under appeal be set aside and that a patent be granted on the basis of the main request or one of the first to fourth auxiliary requests.
VI. In a communication accompanying a summons to oral proceedings, the board expressed its preliminary opinion that the subject-matter of claim 1 of the main request and first and second auxiliary requests added subject-matter with respect to the application as originally filed and was not inventive over the disclosure of document D4. The third and fourth auxiliary requests were considered inadmissible.
VII. With a letter of reply the appellant filed new claims according to fifth to ninth auxiliary requests.
VIII. Oral proceedings were held as scheduled. At the end of the oral proceedings, the Chair announced the board's decision.
IX. The appellant's final requests were that the decision under appeal be set aside and that a patent be granted on the basis of one of the main request, the first and second auxiliary requests considered in the decision under appeal, the third and fourth auxiliary requests filed with the statement of grounds of appeal, or the fifth to ninth auxiliary requests filed with letter of 23 January 2025 in reply to the board's communication.
X. Claim 1 of the main request reads as follows (itemisation added by the board):
A |"An information display method, applied to a terminal, wherein the method comprises: |
B |displaying a first user interface; |
C |obtaining a first operation entered by a user; |
D |in response to the first operation, |
D1| performing text recognition on content displayed on a display screen and word segmentation on the recognized texts to obtain word segmentations, |
D2| determining at least one key character accordingto the word segmentations and |
D3| obtaining characteristic information associated with the at least one key character; and |
E |displaying target information, |
E1| where the target information is information that is in a set of information associated with the at least one key character and that is associated with the characteristic information, |
E2| the characteristic information is a current application scenario information of the terminal; |
F |wherein the displaying target information, where the target information is information that is in a set of information associated with the at least one key character and that is associated with the characteristic information comprises:|
F1| if the electronic device is in a motion scenario, displaying motion field information associated with the at least one key character; |
F2| if the electronic device is in a driving scenario or music playing scenario, displaying music-related information associated with the at least one key character." |
XI. Claim 1 of the first auxiliary request differs from claim 1 of the main request in that the text ", determining at least one key character according to the word segmentations and" has been replaced with the text
"; displaying some or all of the word segmentations on a word segmentation interface;
selecting a key character from the word segmentations displayed based on a second operation;".
XII. Claim 1 of the second auxiliary request differs from claim 1 of the first auxiliary request in that after the text "selecting a key character from the word segmentations displayed based on a second operation;" the following text has been added:
"displaying a second virtual key on the word segmentation interface, wherein the second virtual key comprises one or more of: search, copy, share, or more options;".
XIII. Claim 1 of the third auxiliary request differs from claim 1 of the second auxiliary request in that after the text E2 above, the following text has been added
"and wherein the characteristic information further comprises a search record associated with the key character;"
and in that after the text F above the following text has been added:
"displaying an index of content that is in the search record and that is browsed by the user; and".
XIV. Claim 1 of the fourth auxiliary request differs from claim 1 of the third auxiliary request in that the text "displaying motion field information" has been replaced with the text
"the index of content is corresponding to motion field information"
and in that the text "displaying music-related information" has been replaced with the text
"the index of content is corresponding to music-related information".
XV. Claim 1 of the fifth auxiliary request adds to claim 1 of the main request the following text after the text C above
", wherein the first operation is a screenshot operation".
XVI. Claim 1 of each of the sixth to ninth auxiliary requests adds to claim 1 of each of the first to fourth auxiliary requests the text
", wherein the first operation is a screenshot operation"
after the text C above and the text
"that the user performs on a word segmentation location on the touchscreen"
after the text "the word segmentations displayed based on a second operation".
Reasons for the Decision
Application
1. The invention concerns an information display method and device. When a user performs a first operation, e.g. manual or voice input or taking a screenshot (see page 1, line 24 to page 2, line 4; page 15, line 23 to page 16, line 12 of the translation of the original description), the method according to the described invention determines a "key character" based on the first operation, determines "characteristic information" associated with the key character and displays "target information" associated with the key character and characteristic information (page 1, line 24 to page 2, line 4 and Figure 3).
1.1 The key character may be text entered by the user or recognised by the system using optical character recognition (OCR). The characteristic information may be for instance a "method of contact" including a phone number or email address. The target information displayed on the screen may be a method of contact, in which case a "first information" may also be displayed asking the user whether to dial the phone number (page 2, lines 15 to 29; page 15, line 26 to page 16, line 12).
1.2 After recognising the text using OCR, the device of the invention may segment the text, extract words from the segments as key characters and display the result in a "word segmentation interface" which prompts the user to select a key character. The word segmentation interface may further include a "virtual key" indicating for instance the options "search", "copy" and "share" for selection by the user (page 16, lines 9 to 30, Figures 2B-1 and 2B-2).
1.3 The displayed target information may also include an "index of content previously browsed by the user" such as a directory, a page number or a website previously browsed by the user. According to the description, if the user has searched for the key character before, the user is more likely to browse a previous search result again (page 23, lines 11 to 20).
1.4 The application describes different "manners" of setting the characteristic information (page 19, lines 1 to 4). According to the fifth manner disclosed starting from page 24, line 28, the characteristic information is "current application scenario information", for example a "driving scenario" indicating that the user is driving, a "motion scenario" or a "text chat" scenario. If a user is in a motion scenario, the "motion field information", for instance game news or athlete information, associated with the key character may be used as the target information (page 25, lines 10 to 14).
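The scenario-dependent selection of target information summarised in points 1 to 1.4 can be illustrated by the following minimal sketch, which maps the steps to the board's itemisation of claim 1 (point X. above). It is purely illustrative: the text recognition and the scenario detection are assumed to have already produced their results, and all function names, the trivial word-segmentation and key-character heuristics and the example data are assumptions of this illustration, not taken from the application or from document D4.

```python
# Purely illustrative sketch of the claimed flow (features D1, D2, E, E2, F1, F2).
# All names, heuristics and data are assumptions of this illustration.

def segment_words(recognised_text: str) -> list[str]:
    # D1 (second part): trivial stand-in for word segmentation
    return recognised_text.split()

def determine_key_characters(words: list[str]) -> list[str]:
    # D2: trivial stand-in - treat capitalised words as "key characters"
    return [w for w in words if w.istitle()]

def target_information(recognised_text: str, scenario: str) -> list[str]:
    # D1: text recognition of the displayed content is assumed to have
    # already produced recognised_text (e.g. by OCR after a screenshot)
    words = segment_words(recognised_text)
    keys = determine_key_characters(words)
    # E2: the characteristic information is the current application scenario
    target = []
    for key in keys:
        if scenario == "motion":                # F1
            target.append(f"motion field information for '{key}'")
        elif scenario in ("driving", "music"):  # F2
            target.append(f"music-related information for '{key}'")
    return target                               # E: information to be displayed

# Example: text recognised from a screenshot while music is playing
print(target_information("now playing Beethoven", "music"))
# -> ["music-related information for 'Beethoven'"]
```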
Main request
2. Added subject-matter
2.1 The embodiments involving text recognition and word segmentation are disclosed on page 16, lines 9 to 30; page 18, lines 4 to 14; page 21, lines 21 to 25 and in Figure 5 of the application as filed. In each of these passages, the method includes a step of making a screenshot as the first operation by the user. However, claim 1 does not specify a step of making a screenshot.
2.2 The appellant submitted that the passage on page 18, lines 5 to 7, disclosed performing text recognition as specified in feature D1 and, in the text "the first operation may be a screenshot operation of the user", explicitly mentioned the screenshot operation as an optional feature in the context of text recognition and word segmentation.
The disclosure of the text recognition on page 1, lines 9 and 10, was not an explicit embodiment of the invention itself and also referred explicitly to an "example".
Moreover, the features relating to text recognition and a screenshot operation were not inextricably linked with each other. According to the case law of the Boards of Appeal, an intermediate generalisation was allowed in this case. Text recognition and word segmentation could be performed based on any "first operation entered by a user", irrespective of whether the first operation was e.g. manual or voice input or taking a screenshot. The appellant cited page 1, line 24 to page 2, line 4; page 10, lines 27 to 28; page 15, line 23 to page 16, line 12 of the English translation of the application as filed.
2.3 The board agrees that page 1, lines 9 and 10, is part of the background section and cannot serve as a basis for the claimed subject-matter. In any case, this passage does not disclose the text recognition without the screenshot and describes the combination of screenshot and text recognition only as optional.
2.4 The passage on page 18, lines 4 to 7, including the text cited by the appellant (see point 2.2 above), reads as follows:
"The user may also trigger a screenshot operation, so that the electronic device 100 obtains the captured key character. In this case, the first operation may be a screenshot operation of the user, and then the electronic device 100 performs text recognition on content displayed on the display screen 194, to determine the key character based on the recognized text."
In the board's opinion, this passage does indeed disclose that it is optional to perform a screenshot as the first operation. However, the expression "and then" refers to the first operation being a screenshot, and hence the sentence "and then the electronic device 100 performs text recognition ..." discloses the text recognition step only in combination with the first operation being a screenshot operation. Therefore, this passage does not disclose the text recognition and word segmentation steps of claim 1 being performed in isolation, i.e. without a screenshot operation.
2.5 According to the appellant, the first operation could be any operation, e.g. manual or voice input or taking a screenshot, as disclosed on page 10, lines 27 and 28 and page 15, line 23 to page 16, line 12. The board notes, however, that the step of performing text recognition of claim 1 is specifically restricted to recognising content displayed on a display screen in response to the first operation (see features D and D1). The passages cited by the appellant mention voice input and speech recognition but not in combination with text recognition of displayed content.
2.6 Furthermore, without further explanation and in the absence of a screenshot, it is not apparent why, from a technical point of view, text recognition of displayed content would be necessary after speech recognition of the voice input. Similarly, without further explanation it is unclear why text recognition would be necessary in the context of a normal interaction with the graphical user interface (GUI). In a typical GUI the system knows where the GUI elements (buttons, menus, etc.) are on the screen and does not have to recognise text on the displayed content in order to identify those elements. No such further explanations are provided in the application. The text recognition of displayed content is disclosed in the present application only in a specific context in which a screenshot is used for extending the GUI functionality (see e.g. page 16, lines 9 to 21 and Figures 2B-1 and 2B-2). Therefore, the appellant's argument that the first operation being a screenshot and the text recognition are not inextricably linked with each other is not convincing.
2.7 In view of the above, claim 1 of the main request claims an unallowable undisclosed intermediate generalisation and does not fulfil the requirements of Article 123(2) EPC.
First and second auxiliary requests
3. Added subject-matter
3.1 The amendments introduced by claim 1 of the first and second auxiliary requests (see items XI. and XII. above) do not overcome the objection for added subject-matter raised above against claim 1 of the main request.
3.2 Therefore, for the same reasons as given for the main request, the first and second auxiliary requests do not fulfil the requirements of Article 123(2) EPC.
Third and fourth auxiliary requests
4. Claim 1 of the third auxiliary request differs from claim 1 of the second auxiliary request in that it adds the following features:
J the characteristic information further comprises a search record associated with the key character;
K displaying an index of content that is in the search record and that is browsed by the user.
5. Claim 1 of the fourth auxiliary request differs from claim 1 of the third auxiliary request in that:
- "displaying motion field information" in feature F1 has been replaced with "the index of content is corresponding to motion field information"
- "displaying music-related information" in feature F2 has been replaced with "the index of content is corresponding to music-related information".
6. Admissibility
6.1 The third and fourth auxiliary requests have been newly submitted with the grounds of appeal (see section V. above). They are amendments to the party's appeal case which may be admitted only at the discretion of the board (Article 12(4) RPBA).
6.2 The appellant argued that the amendments introduced by the third and fourth auxiliary requests served to overcome the objections raised in the decision under appeal. The amendments introduced by the third auxiliary request were based on a dependent claim previously considered and searched by the examining division. Claim 1 of the fourth auxiliary request merely clarified that "if the device is in its respective scenario, it is the index of content that is corresponding to the respective information". Since the examining division and the board were of the opinion that features F1 and F2 were novel over D4, further clarifying those features would not have an impact on the search results because the more general features were not found. The procedural economy of the appeal proceedings would thus not be affected by considering these two requests.
The appellant further submitted that the representative had not had sufficient time to consult with the client and together prepare these requests in the examination proceedings. The preliminary opinion of the examining division was dated 20 September 2023, whilst the oral proceedings were scheduled for 29 September 2023.
6.3 The board does not find these arguments convincing. Features J and K relate to a search record and a content index, which were not mentioned in claim 1 of the higher ranking requests on which the decision under appeal was based. These amendments introduced by claim 1 of the third auxiliary request are not minor clarifications. Compared to the subject-matter on which the decision under appeal was based and which is part of the appeal case pursuant to Article 12(2) RPBA, claim 1 of the fourth auxiliary request introduces not only the amendments to features F1 and F2 but also the amendments introduced by claim 1 of the third auxiliary request, which are not minor clarifications.
6.4 With regard to the question of whether there had been sufficient time to file these auxiliary requests in the examination proceedings, the board notes that the communication of 20 September 2023 was not a preliminary opinion accompanying the summons to oral proceedings but a reply to the new requests submitted on 28 August 2023, after the summons to oral proceedings of 22 December 2022. The communication of 20 September 2023 thus dealt with these new requests submitted after the summons.
Furthermore, the appellant was not confronted, in the communication of 20 September 2023, let alone in the decision under appeal, with fresh objections which could have been raised before. In the communication of 20 September 2023, the examining division raised inventive-step objections based on document D4 similar to those raised in the preliminary opinion accompanying the summons of 22 December 2022.
Under these circumstances, the examining division was not obliged to provide, in advance of the oral proceedings, a further preliminary opinion on the newly filed requests, not to mention give the appellant more time to prepare further requests.
In reply to the examining division's preliminary opinion, the appellant chose not to file any further amendments and announced that it would not attend the oral proceedings. The amendments introduced by the third and fourth auxiliary requests thus could and should have been submitted during the examination proceedings.
6.5 At the oral proceedings before the board, the appellant also argued that the board had raised new objections under Article 123(2) EPC and had changed the inventive-step argumentation.
6.6 The board notes, however, that the third and fourth auxiliary requests were filed with the grounds of appeal and were intended to overcome the inventive-step objection of the decision under appeal. The inventive-step reasoning of the board's communication pursuant to Article 15(1) RPBA is based on the same starting point as the decision under appeal and does not represent a major shift from the reasoning of the decision under appeal.
6.7 In view of this, the third and fourth auxiliary requests are not admitted into the proceedings (Article 12(4) and (6) RPBA).
Fifth to ninth auxiliary requests
7. Admissibility
7.1 The amendments introduced by claim 1 of each of the fifth to ninth auxiliary requests address objections for added subject-matter raised for the first time by the board in its communication pursuant to Article 15(1) RPBA.
7.2 The board recognises that this constitutes an exceptional circumstance under Article 13(2) RPBA justifying admitting the fifth to seventh auxiliary requests into the appeal proceedings. Therefore, the fifth to seventh auxiliary requests are admitted into the proceedings.
7.3 The eighth and ninth auxiliary requests were based on the third and fourth auxiliary requests, which were not admitted into the proceedings. Under normal circumstances, a further amendment of an inadmissible request is also inadmissible. The exceptional circumstances recognised by the board for the fifth to seventh auxiliary requests are thus not valid for the eighth and ninth auxiliary requests. The board cannot identify any other special reason for admitting these two auxiliary requests.
Therefore, for the reasons given for the third and fourth auxiliary requests mutatis mutandis, the eighth and ninth auxiliary requests are not admitted into the proceedings (Articles 12(4) and 13(1) RPBA).
Fifth auxiliary request
8. Inventive step
8.1 Document D4 discloses a system for providing content selection techniques within a user interface of a user device. The user interface can include a plurality of objects, such as images or text. An input by a user can indicate the selection of an object. The user device determines a content attribute associated with the selected object, where the content attribute can identify one or more characters or words represented by the selected object. The user device can then determine a content entity based at least in part on the determined content attribute, and relevant actions based at least in part on the content entity. The relevant actions are displayed within the user interface (paragraph [0017]).
8.2 It follows from the above that the content attribute, the content entity and the relevant actions of D4 correspond respectively to the key character, characteristic information and target information of claim 1.
Therefore, document D4 discloses a method including features A and B, determining at least one key character, and features E and E1.
8.3 In the system of D4, obtaining the content attribute of a selected object can include capturing a screenshot of the data displayed, the data including a plurality of objects, and recognising the text by means of OCR (paragraph [0019]). The system can identify one or more displayed objects and the content entity associated with the one or more objects selected by the user (paragraphs [0019], [0027], [0028] and [0030]).
In the board's opinion, text segmentation, including word segmentation, is part of a process of text recognition and is thus implicitly disclosed in D4.
Therefore, document D4 also discloses features C, D, D1, D2 and D3 and the additional features of claim 1 of the fifth auxiliary request with respect to claim 1 of the main request, i.e. that the first operation is a screenshot operation.
8.4 Once the content entity has been determined in the system of D4, one or more relevant actions associated with the content entity can be determined and displayed. The relevant actions can be determined based at least in part on a determined context associated with the user interface, the selected object and/or the content entity. For instance, if the content entity is located within a web browser, the relevant actions can include a web search of the content entity using a suitable search engine. The relevant actions are then displayed, for instance in a menu of the user interface (paragraphs [0022] and [0024]).
Therefore, document D4 also discloses features E2 and F.
8.5 The subject-matter of claim 1 differs from the information display method of document D4 in that it includes the following features:
F1 if the electronic device is in a motion scenario, displaying motion field information associated with the at least one key character;
F2 if the electronic device is in a driving scenario or music playing scenario, displaying music-related information associated with the at least one key character.
This was not contested by the appellant.
8.6 Regarding inventive step of the subject-matter of claim 1 of the fifth auxiliary request, the appellant essentially made reference to the arguments provided for the main request. The appellant remarked that the "motion scenario" and the "driving scenario or music playing scenario" were not considered at all in D4 as a further determination level for information to be displayed. Being able to consider such different scenarios, however, did not merely relate to providing specific cognitive content, as had been alleged by the examining division, but contributed to the technical effect of coping with physically limited display space. That this technical effect was to be considered as contributing to the technical character of an invention and, hence, in the assessment of inventive step had already been acknowledged in decision T 928/03, Reasons 5.3. This also applied to the present claim, which specified an interactive user interface. Distinguishing features F1 and F2, which concerned different technical scenarios of the electronic device, were even more technical than the features discussed in Reasons 5.3 of T 928/03, which related to computer-game information.
The appellant argued that in the present case the technical effect resulted from an additional filtering of the information to be displayed based on different scenarios, thereby reducing the displayed content and overcoming the physically limited display space. This technical effect did not rely on a user preference and was not a mathematical method. In addition, the invention also contributed to improving accuracy for displaying and recommending information. Distinguishing features F1 and F2 thus achieved the technical effect of overcoming the physically limited display space, thereby improving accuracy for displaying and recommending information and improving recommendation efficiency and technical accessibility.
8.7 The appellant further noted that features A and B and distinguishing features F1 and F2 of claim 1 defined at least implicitly a display of a terminal. As was well known, a display of a terminal device was finite and therefore limited. In order to recognise the technical effect, it was not necessary for the claim itself to define this physical limitation of the display or a specific way of displaying information.
According to the appellant, the distinguishing features F1 and F2 differentiated between technical scenarios, a motion scenario and a driving or music playing scenario. A user interaction in features F1 and F2 was not required in order to achieve the technical effect of using the available limited display space more efficiently by distinguishing between different technical scenarios.
8.8 The board notes that claim 1 does not provide details about the way the motion field information and music-related information are retrieved and displayed and does not specify how the motion scenario and the driving or music playing scenario are determined or detected. The board thus cannot recognise the scenarios as "technical scenarios", as argued by the appellant.
The two distinguishing features merely specify which type of information is displayed in different scenarios and thus relate to presentation of information. Such subject-matter avoids the exclusion under Article 52(2) and (3) EPC only if it interacts with technical features of the claimed invention to produce a technical effect.
8.9 In decision T 928/03, the invention is a guide display device for use in a video game system. The three distinguishing features concern the display of a ring-shaped guide mark on the field plane, including the manner of displaying a "pass guide mark" to indicate on the display area a player in the same team to whom the "game medium" (e.g. the ball) can be passed (see Facts II and Reasons 3.2). The board in T 928/03 finds that the case at hand is different from a case in which the overall effect is exclusively an intellectual effect on a human being because the guide mark serves a technical purpose (visibility) and is not just displayed for the sake of viewing but for enabling a continued man-machine interaction (Reasons 4.1.1). The geometric shape of the guide mark does not make a technical contribution (Reasons 4.1.2), but the manner of displaying the pass guide mark is technical (Reasons 4.2 and 4.3). The third feature, which specifies that a portion of the pass guide mark is displayed on the end of the display area even when the player character comes out of the display area so as to properly indicate the direction in which to pass the game medium, is considered to address "the conflicting technical requirements of displaying an enlarged portion of an image (into which the user may have zoomed) and keeping an overview of a zone of interest which is larger than the display area" (Reasons 5.3).
8.10 Contrary to the appellant's arguments, the conclusions of decision T 928/03 do not apply to the present case. The information displayed according to features F1 and F2 is not further used in the claimed method and does not enable a continued man-machine interaction. The display steps of distinguishing features F1 and F2 serve only the non-technical purpose of providing information to the user. While the method of claim 1 concerns a method for displaying information in a physical display of a terminal and each physical display has a limited display area, as argued by the appellant, features F1 and F2 are not a solution to the problem of overcoming the physically limited display space. Claim 1 does not restrict the information displayed in a specific scenario and features F1 and F2 do not explain how the information is displayed, for example taking into account the area available in the display. They do not reflect any considerations taking into account technical characteristics of the screen.
8.11 According to established case law, displaying information according to the user's preferences or informational needs is not a technical effect (T 598/14, Reasons 2.4; T 1741/08, Reasons 2.1.6; T 1526/19, Reasons 2.6). In line with this case law, displaying information depending on an abstract scenario is not a technical effect either.
8.12 The board thus cannot recognise the technical effects alleged by the appellant, namely improving accuracy for displaying and recommending information and improving recommendation efficiency and technical accessibility. The distinguishing features F1 and F2 concern presentation of information as such and do not contribute to an inventive step.
8.13 Therefore, the subject-matter of claim 1 of the fifth auxiliary request is not inventive (Article 56 EPC).
Sixth auxiliary request
9. Claim 1 of the sixth auxiliary request differs from claim 1 of the fifth auxiliary request in that feature D2 has been replaced with the following features
G "displaying some or all of the word segmentations on a word segmentation interface";
H "selecting a key character from the word segmentations displayed based on a second operation that the user performs on a word segmentation location on the touchscreen".
10. Inventive step
10.1 With regard to inventive step of claim 1 of the sixth auxiliary request, the appellant essentially referred to the arguments provided for the first auxiliary request.
The appellant argued that document D4 did not disclose features G and H. Paragraphs [0027] to [0029], [0032] and [0046] to [0051], and Figure 2, related to the selection of a kind of object, after it had been selected by a user. Document D4 did not disclose displaying the word segmentations on a word segmentation interface, i.e. that the word segmentations were obtained by performing text recognition on displayed content after a first operation, as specified in feature D1. In document D4 a selected text was only analysed after the user had selected it.
The distinguishing features of claim 1 of the sixth auxiliary request further improved efficiency and technical accessibility for a user on the limited display.
10.2 The system of document D4 can recognise more than one object, including text objects, depicted on the user interface and obtain the respective content attributes ("key characters" in the language of claim 1). It can also identify one of those objects being selected by the user (paragraphs [0027] to [0029] and [0032]). For example, in the user interface of Figure 2 of document D4, both objects 206 and 208 can be recognised by the system, displayed again as recognised objects (as specified in feature G) and selected by the user (paragraphs [0046] and [0051]). This selection, for example after the OCR operation, constitutes a "second operation" within the meaning of claim 1, as specified in feature H.
The appellant's reasoning contradicts the board's analysis of document D4 and claim 1. For the reasons given in the following, the board does not find the appellant's arguments convincing.
10.2.1 The board does not agree with the appellant's argument that document D4 does not disclose that the word segmentations are obtained by performing text recognition on displayed content after a first operation, as specified in feature D1. It is clear for instance from paragraphs [0019] and [0028] that the OCR operations are performed after the screenshot operation on displayed content, which corresponds to the first operation.
10.2.2 The board also disagrees with the appellant that document D4 does not disclose displaying the word segmentations on a "word segmentation interface". The feature "word segmentation interface" is not further defined in claim 1, and is merely an interface on which results of the word segmentation are displayed. In view of this, the board does not recognise a difference between the "word segmentation interface" of claim 1 and the system interface of document D4, on which the results of the OCR operation are displayed, for example the interface depicted in Figure 3 or the interface of Figure 4 with buttons such as "TRANSLATE", "SEARCH" and "COPY".
10.2.3 The board does not find convincing the appellant's arguments that in document D4 a selected text is only analysed after the user has selected it and that document D4 does not disclose features G and H because it discloses only the selection of a "kind of object" after it has been selected by a user.
In some embodiments of document D4 an initial "content selection" step takes place before the OCR operation. Such an embodiment is referred to in the passage of paragraph [0032] cited by the appellant: "For instance, in implementations wherein the user input is provided through user interaction with the content selection element, the location of the drop point of the content selection element by the user can be analyzed to determine a corresponding object". However, this is only an optional feature. In multiple embodiments disclosed in document D4, the OCR operation is not limited to the object close to the "drop point of the content selection element" and the system identifies multiple objects in the screenshot image (see e.g. paragraphs [0027] to [0029], [0032] and [0046] and Figures 2 and 3).
Furthermore, a method including an initial operation of "content selection", even if performed before the first operation, is not outside the scope of claim 1.
The board further notes that paragraphs [0062] to [0066] and Figure 6 disclose embodiments in which an initial "content selection" is optional and which include steps of capturing a screenshot of the user interface, determining objects within the user interface, and identifying a selected object based in part on the user input. In the board's opinion, the skilled person thus understands from document D4 that the user is able to select one of the multiple objects identified after the OCR operation, also in the embodiments of the passages cited above.
10.3 Therefore, features G and H are known from document D4 and the distinguishing features are the same as for the fifth auxiliary request (features F1 and F2, see point 8.5 above). For the reasons given for the fifth auxiliary request, claim 1 of the sixth auxiliary request does not fulfil the requirements of Article 56 EPC.
Seventh auxiliary request
11. Claim 1 of the seventh auxiliary request differs from claim 1 of the sixth auxiliary request in that it specifies the following features after feature H:
I displaying a second virtual key on the word segmentation interface, wherein the second virtual key comprises one or more of: search, copy, share, or more options.
12. Inventive step
12.1 With regard to feature I, the appellant argued that document D4 taught that relevant actions were determined in response to analysing one particular selected object. It did not appear to be disclosed that a virtual key such as the second virtual key of feature I was displayed on a word segmentation interface itself.
The distinguishing features of claim 1 of the seventh auxiliary request further improved efficiency and technical accessibility for a user on the limited display. Providing the technical possibility to perform actions such as search, copy, share, etc. directly in the word segmentation interface reduced efforts of handling such actions significantly. Providing the second virtual key on the word segmentation interface thus contributed to the technical character.
12.2 The board does not find the appellant's arguments convincing.
Document D4 discloses that relevant actions are displayed based on the content entity corresponding to the object selected by the user. One of the relevant actions may be "copy" (paragraphs [0038] to [0042] and [0049], Figures 2 and 4). For the reasons given under point 10.2.2 above, the board is of the opinion that the interface depicted in Figure 4 corresponds to a "word segmentation interface" of claim 1. The buttons of Figure 4, including the "COPY" button, are displayed after the OCR operation. Therefore, feature I is also known from document D4.
12.3 The distinguishing features are thus features F1 and F2 (see point 8.5 above). For the reasons given for the fifth auxiliary request, claim 1 of the seventh auxiliary request does not fulfil the requirements of Article 56 EPC.
Conclusion
13. Since none of the requests admitted into the appeal proceedings is allowable, the appeal is to be dismissed.
Order
For these reasons it is decided that:
The appeal is dismissed.