T 2412/13 (Managing images and geographic location in mobile device/HTC) of 27.2.2018

European Case Law Identifier: ECLI:EP:BA:2018:T241213.20180227
Date of decision: 27 February 2018
Case number: T 2412/13
Application number: 11179398.0
IPC class: G06F 17/30
Language of proceedings: EN
Distribution: D
Versions: Unpublished
Title of application: Method and system for managing images and geographic location data in a mobile communication device
Applicant name: HTC Corporation
Opponent name: -
Board: 3.5.07
Headnote: -
Relevant legal provisions:
European Patent Convention Art 56
Keywords: Inventive step - (no)
Catchwords: -

Cited decisions: -

Citing decisions: -

Summary of Facts and Submissions

I. The present European patent application No. 11179398.0 was filed as a divisional of European patent application No. 09162786.9. The appeal lies from the decision of the Examining Division to refuse the present application on the ground that the subject-matter of the independent claims of the sole request then on file lacked inventive step, Articles 52(1) and 56 EPC, over prior-art document D1 in combination with document D4 and the standard practice of the skilled person:

D1: US 2005/0041015 A1, published on 24 February 2005;

D4: US 2008/0146274 A1, published on 19 June 2008.

In the contested decision the Examining Division further expressed the view that the subject-matter of the dependent claims did not seem to be inventive either.

II. In the statement of grounds of appeal, the appellant requested that the decision be set aside and that a patent be granted on the basis of amended claims 1 to 12 submitted with the grounds of appeal. The appellant requested "oral proceedings only via video conference" if the independent claims were not deemed to be allowable.

III. In a communication accompanying a summons to oral proceedings, the Board identified some points of discussion with regard to added subject-matter under Articles 123(2) and 76(1) EPC and expressed the preliminary opinion that the subject-matter of claim 1 did not involve an inventive step over document D1 in combination with the standard practice of the skilled person. Some features distinguishing the claimed mobile communication device from that of document D1 were also known from document D4 and some seemed to lack technical character. The Board informed the appellant that its request that oral proceedings be held by video conference had to be refused in view of the fact that the "general framework" that would be required as a prerequisite for holding oral proceedings by video conference before a board of appeal was currently not in place.

IV. Following the letter dated 24 November 2017, in which the appellant requested that the oral proceedings be held as a video conference or, alternatively, in the afternoon, the Board rescheduled the oral proceedings for 14.00 hours on 27 February 2018.

V. With a letter of reply the appellant submitted further arguments in favour of inventive step. The sole request on file was maintained.

VI. Oral proceedings were held on 27 February 2018. At the end of the oral proceedings, the chairman pronounced the Board's decision.

VII. The appellant's final request was that the decision under appeal be set aside and that a patent be granted on the basis of claims 1 to 12 of the sole request filed with the statement of grounds of appeal.

VIII. Claim 1 of the sole request reads as follows:

"A mobile communication device (100) for managing images and associated geographic location data in a mobile device and displaying the images and the associated geographic location data, the apparatus comprising:

a housing (101) having a form factor suitable for handheld use;

a display device (110, 314, 512) contained in the housing;

an input device (102, 104, 109, 312, 514) contained in the housing; wherein the input device includes a touch-sensitive screen;

at least one wireless network connection component (316, 508) contained in the housing and configured to connect to a mobile wireless network;

a location component contained (510) in the housing and configured to generate geographic location data based on a current geographic location of the mobile device;

a camera component (108, 506) contained in the housing configured to generate an image;

a memory (304) contained in the housing; and

a processor (302) contained in the housing and coupled among the display device, the input device, the at least one wireless network connection component, the location component, the camera component, and the memory;

wherein the processor is configured to execute various components, wherein the components comprise: a location association component (516) configured to receive the image from the camera component and geographic location data based on the current geographic location of the mobile device from the location component and to store an association between the image and the geographic location data in the memory; a map display component (522) configured to cause the display device to display a geographic indicator of the received image on a map at the associated geographical location, a supplemental information component (518) configured to associate supplemental information with the received image, wherein the supplemental information includes description information and at least one of telephone number, category, street address, network address, or audio data; wherein the description information is automatically determined and generated based on the current geographic location of the mobile device and wherein the street address is automatically determined based on the geographic location using mapping software or mapping server providing Reverse Geocoding service; wherein the telephone number, category, network address and audio data are received from the input device (102, 104, 109, 312, 514); and wherein an interface is displayed in response to a new image received from the camera (108) to enable a user to provide new data or edit the data for the supplemental information; an association display component (520) configured to cause the display device to display the images for selection and the description information corresponding to each of the images in a first user interface and to display the selected image, the associated geographic information data corresponding to the selected image, and the associated supplemental information corresponding to the selected image in a second user interface; and wherein the first user interface comprises a navigation means configured to enable the user to navigate among the images in the first user interface."

IX. The appellant's arguments relevant to this decision are discussed in detail below.

Reasons for the Decision

1. The appeal complies with the provisions referred to in Rule 101 EPC and is therefore admissible.

2. The invention

2.1 The invention concerns managing images and associated geographic location data in a mobile communication device with a camera. When an image is generated, the system determines a geographic location associated with the image, e.g. using a built-in GPS receiver, and stores the image and the associated geographic location data. The system may also associate supplemental information with the image (see paragraphs [0016] and [0046] to [0047] and Figure 6A of the A1 publication).

2.2 The application describes in paragraphs [0049] to [0054], with reference to Figure 6B, the process for associating supplemental information with a received image. In order to ensure that an appropriate identifier is assigned to the image, the system either receives description information from the user or automatically generates the description information by using other provided information, such as the current geographic location of the mobile device. It then obtains data for the other fields of supplemental information, such as telephone number, category, street address, network location, audio data, user opinion rating, and accuracy index. Most of the fields are provided by the user. The street address may be automatically determined based on the geographic location. The accuracy index is usually generated automatically by the system, taking into account the method used to determine the location. For example, if the location was provided by a GPS receiver, a higher accuracy index may be assigned than if the location was determined based on proximity to cellular network transceivers. Finally, the supplemental information is indexed and stored.
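Purely by way of illustration, the process described in this paragraph could be sketched as follows. The sketch is not taken from the application; all names are hypothetical, and reverse_geocode() merely stands in for any mapping software or server providing a reverse geocoding service:

    from typing import NamedTuple, Optional

    class Location(NamedTuple):
        latitude: float
        longitude: float

    def reverse_geocode(location: Location) -> Optional[str]:
        # Placeholder for mapping software or a mapping server providing
        # a reverse geocoding service (cf. feature B.3.3).
        return None

    def build_supplemental_info(location: Location, location_method: str,
                                user_input: dict) -> dict:
        """Assemble the supplemental information for a received image."""
        info = {}

        # Description information: received from the user if provided,
        # otherwise generated automatically from the current location.
        info["description"] = (user_input.get("description")
                               or f"Image taken at {location.latitude:.4f}, "
                                  f"{location.longitude:.4f}")

        # The street address may be determined automatically based on
        # the geographic location.
        info["street_address"] = (user_input.get("street_address")
                                  or reverse_geocode(location))

        # Most of the other fields are provided by the user.
        for field in ("telephone_number", "category", "network_location",
                      "audio_data", "user_opinion_rating"):
            info[field] = user_input.get(field)

        # The accuracy index is generated automatically, taking into
        # account the method used to determine the location: GPS is more
        # accurate than proximity to cellular network transceivers.
        info["accuracy_index"] = {"gps": 0.9, "cell": 0.5}.get(location_method, 0.3)

        return info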

2.3 The system supports a mapping mode in which the location associated with a selected image or locations associated with multiple images stored by the system are displayed using geographic indicators on a displayed map of the geographical area (paragraph [0036]).

3. Claim 1

3.1 Claim 1 defines a mobile communication device for managing images and associated geographic location data and displaying the images and the associated geographic location data. It first defines the hardware components of the mobile communication device (part A) and then functional components executed by the processor (part B).

3.2 The first part of the claim specifies the following features itemised by the Board:

(A) the apparatus comprising:

(A.1) a housing having a form factor suitable for handheld use;

(A.2) a display device contained in the housing;

(A.3) an input device contained in the housing;

(A.3.1) wherein the input device includes a touch-sensitive screen;

(A.4) at least one wireless network connection component contained in the housing and

(A.4.1) configured to connect to a mobile wireless network;

(A.5) a location component contained in the housing and configured to generate geographic location data based on a current geographic location of the mobile device;

(A.6) a camera component contained in the housing configured to generate an image;

(A.7) a memory contained in the housing; and

(A.8) a processor contained in the housing and coupled among the display device, the input device, the at least one wireless network connection component, the location component, the camera component, and the memory.

The second part of claim 1 specifies that (itemisation by the Board)

(B) the processor is configured to execute various components, wherein the components comprise:

(B.1) a location association component configured to receive the image from the camera component and geographic location data based on the current geographic location of the mobile device from the location component and to store an association between the image and the geographic location data in the memory;

(B.2) a map display component configured to cause the display device to display a geographic indicator of the received image on a map at the associated geographical location,

(B.3) a supplemental information component configured to associate supplemental information with the received image,

(B.3.1) wherein the supplemental information includes description information and at least one of telephone number, category, street address, network address, or audio data;

(B.3.2) wherein the description information is automatically determined and generated based on the current geographic location of the mobile device and

(B.3.3) wherein the street address is automatically determined based on the geographic location using mapping software or a mapping server providing a reverse geocoding service;

(B.3.4) wherein the telephone number, category, network address and audio data are received from the input device; and

(B.3.5) wherein an interface is displayed in response to a new image received from the camera to enable a user to provide new data or edit the data for the supplemental information;

(B.4) an association display component configured to cause the display device to

(B.4.1) display the images for selection and the description information corresponding to each of the images in a first user interface and

(B.4.2) display the selected image, the associated geographic information data corresponding to the selected image, and the associated supplemental information corresponding to the selected image in a second user interface; and

(B.4.3) wherein the first user interface comprises a navigation means configured to enable the user to navigate among the images in the first user interface.

The subject-matter of claim 1 is similar to that of the refused claim 1. In particular, claim 1 differs from the refused claim in that feature B.3.1 now specifies the description information as a non-optional part of the supplemental information, feature B.3.4 has been added, and feature B.3.5 refers to "data" instead of "values".

4. Inventive step - claim 1

4.1 In the decision under appeal, document D1 was the starting point for the inventive-step assessment. That document discloses a wireless telephone which extracts image files, creates icons of the images and displays the icons in a layout on the display. The layout may be based on date information or on positional information corresponding to the images. In the latter case, the layout may include map data (see abstract, and claims 1 and 4 to 8 of document D1).

Prior-art document D1 therefore concerns "a mobile communication device for managing images and associated geographic location data [...] and displaying the images and the associated geographic location data", like present claim 1, and is an adequate starting point for assessing inventive step. This has not been contested by the appellant.

4.2 Features A

The Board agrees with the Examining Division that the wireless telephone of document D1 includes a housing, a display device, at least one wireless network connection, a location component, a camera, a memory and a processor as defined in part A of claim 1 and an input device including a cross key and a decision key (see paragraphs [0068] to [0073] and Figures 4A, 4B, 5 and 6). As argued in the contested decision, the mobile communication device of document D1 thus comprises all the features of part A of claim 1 with the exception of feature A.3.1. That has not been contested by the appellant.

4.3 Feature B.1

The mobile device of document D1 stores address book data including name, telephone number, mail address and the like, as well as images and associated information. Information regarding the image files is stored in the data folder management table and includes positional information (paragraph [0075], Figure 9). When a user chooses to fetch an image from the camera and to store it, the system automatically generates a file name based on the date and a number, obtains the current positional information (latitude and longitude) and stores the image file. It also stores the image's associated information, including the current positional information, in the data folder management table (paragraphs [0077] to [0082], Figures 8 and 9). The device of document D1 therefore executes a location association component according to feature B.1.

4.4 Features B.3

For each image, the data folder management table stores a file name, a file preparation date, a positional information flag, positional information, sub-folder name 1, folder flag and sub-folder name 2. The last three fields are used to associate a date folder and a place name folder, e.g. "in and around Shibuya", with an image (paragraph [0082], Figure 9). The information stored in the data folder management table, the positional information and the place name folder (name) correspond, respectively, to the supplemental information, geographic location data and description information of claim 1. Document D1 therefore also discloses feature B.3.
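For illustration only, one record of such a table could be modelled as follows; the field names paraphrase Figure 9 of document D1, but the sketch itself is not taken from that document:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class DataFolderRecord:
        # One row of the data folder management table (cf. Figure 9 of D1).
        file_name: str                                  # generated from date and number
        file_preparation_date: str
        positional_info_flag: bool                      # positional data present?
        positional_info: Optional[Tuple[float, float]]  # (latitude, longitude)
        sub_folder_name_1: str                          # date folder
        folder_flag: bool
        sub_folder_name_2: Optional[str]                # place name folder,
                                                        # e.g. "in and around Shibuya"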

With regard to feature B.3.1, the information stored in Table 1242, corresponding to the supplemental information, thus includes the place name folder name, i.e. sub-folder name 2, which corresponds to the description information. Since "category" is not further defined in the claim, one of the other fields of the data folder management table, e.g. a flag field, can be seen as indicating a category. The other items of supplemental information are specified in feature B.3.1 only as alternatives. Feature B.3.1 is therefore not new over document D1.

The appellant argued that document D1 failed to disclose automatically determining the description information as recited in feature B.3.2. The description information of claim 1 described the current geographic location, for example "California Academy of Science" 454 illustrated in Figure 4G, whereas D1 merely named the folder as "in and around" a large area (e.g. Tokyo, Ikebukuro, or Shinjuku). The place name of document D1 was not similar to the street address of the present application.

The Board notes however that the claim does not specify the degree of geographical accuracy of the description information, which is not the same as the street address, and that in document D1 both the positional information and the place name are obtained automatically (paragraphs [0080] and [0081]). Document D1 therefore also discloses feature B.3.2. The Board nevertheless recognises that document D1 does not disclose calculating a street address as defined in feature B.3.3.

In its letter of reply the appellant argued that in D1 the information in the data folder management table 1242 of Figure 9 was obtained when the image file was retrieved, and that "the file preparation day is obtained from the device which retrieves the file, and the positional information is obtained from the positioning device disposed in the device which retrieves the file". Document D1 did not disclose "generating supplemental information (not obtained directly from the device itself) according to the positional information" and could hence not achieve the corresponding advantages of the invention of generating and/or determining the supplemental information in real time.

The Board cannot identify such a distinction with respect to features B.3 to B.3.2. In the device of document D1, when an image is received from the camera after being fetched by the user, the system may automatically obtain the positional information and place name and store the image information in the data folder management table 1242 (see paragraphs [0081] and [0082]). However, the Board concedes that document D1 does not disclose that an interface is displayed in response to a new image being received from the camera to enable a user to enter or edit data as defined in feature B.3.5.

4.5 Features B.2 and B.4

Document D1 discloses in paragraphs [0085] to [0097], with reference to Figure 10, how the mobile device displays image file information. It supports two main display states, one based on the date folder information and one on positional information.

In the first state, the system displays icons and respective file names corresponding to the image files "in order of the date folders" (paragraph [0087], Figure 11A).

In the second state, folder icons and image file icons with respective place name folder names are "displayed around a self-position 1036 on the main display unit" (paragraph [0089], Figure 11B). If a user selects a file or folder icon and the mobile device is connected to the network, the mobile device sends the current positional information and the positional information of the selected icon to an external map information service to obtain map information. It then displays, on a map used as background, the selected file icon, the self-position and the folder icon in their corresponding positions on the map, together with a route from the current position to the selected icon (paragraphs [0090] to [0093], Figure 11C).
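By way of illustration only (the interface and all names below are hypothetical, not taken from document D1), the interaction described in this paragraph could be sketched as:

    def show_selected_on_map(display, map_service, self_position, selected_icon):
        # Send the current positional information and the positional
        # information of the selected icon to the external map
        # information service to obtain map information.
        map_info = map_service.get_map(self_position, selected_icon.position)

        # Display, on the map used as background, the selected file icon,
        # the self-position and a route from the current position to the
        # selected icon (cf. Figure 11C of D1).
        display.draw_background(map_info)
        display.draw_icon(selected_icon)
        display.draw_self_position(self_position)
        display.draw_route(self_position, selected_icon.position)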

In the Board's opinion, the display in the second state of the icons in positions relative to the current position, as shown in Figure 11B of document D1, corresponds to the display of images for selection in a first user interface according to features B.4 and B.4.1. The display of the selected image on the map as shown in Figure 11C corresponds to the second user interface of feature B.4.2.

As explained in paragraph [0090], the user may select an icon on the display of Figure 11B by using a cross key and a decision key (Figure 4A, cross key 106, decision key 107). The user interface of document D1 therefore also includes navigation means as defined in feature B.4.3.

Since some of the image file icons shown in the display of Figures 11B and 11C correspond to received images, feature B.2 is also disclosed in document D1.

The appellant argued that the device of document D1 did not display all the information of table 1242. With reference to Figures 11B and 11D, document D1 did not disclose displaying any other information that was associated with the location of the image in addition to the description information 1034. The Board notes that the "description information 1034", i.e. the place name, and the location of the icon in the display of Figure 11C provide "associated geographic information data" as specified in feature B.4.2. The Board concedes however that the user interface of Figure 11C of D1 does not display the associated supplemental information corresponding to the selected image as defined in feature B.4.2.

The appellant also argued that in the method of document D1 the user first selected the folder to be opened and then images corresponding to the folder were displayed. In the Board's view, that is true for the transition between the user interfaces of Figures 11B and 11C, but that is covered by the claim. Furthermore, the system of document D1 does not display all the images in the user interface of Figure 11B, but only those with positional information (see paragraphs [0088] and [0089] and steps (A203) to (A206) of Figure 10). On the other hand, the claim does not specify that all images stored in the system are displayed in the first user interface.

4.6 From the above analysis, the Board concludes that the mobile communication device of claim 1 differs from that of document D1 in that

(A.3.1) the input device includes a touch-sensitive screen;

(B.3.3) the street address is automatically determined based on the geographic location using mapping software or a mapping server providing a reverse geocoding service;

(B.3.4) the telephone number, category, network address and audio data are received from the input device;

(B.3.5) an interface is displayed in response to a new image received from the camera to enable a user to provide new data or edit the data for the supplemental information;

(B.4.2') in step (B.4.2) the associated supplemental information corresponding to the selected image is also displayed in the second user interface.

4.7 The appellant argued that there was a synergistic effect between feature A.3.1 and the other distinguishing features. A portion of the supplemental information could be provided by the user through the touch-sensitive input device.

The Board is however of the opinion that none of the other distinguishing features are specifically designed for a touch-sensitive input device. In the context of graphical user interfaces (GUIs) the design decision that data should be input by the user after a particular event is independent of the input device available. The Board therefore agrees with the decision under appeal that there is no synergistic effect between feature A.3.1 and the other features, and that A.3.1 is merely an alternative solution to provide an input device. It would be an ordinary option for the skilled person to use a touch-sensitive input device, which at the date of priority of the present application was a known solution for mobile devices (see e.g. document D4, paragraph [0032]).

4.8 Distinguishing feature B.4.2' relates to presentation of information as such and therefore lacks technical character. Such a feature does not contribute to inventive step.

4.9 The other distinguishing features B.3.3 to B.3.5 characterise the user interface designed to associate supplemental information such as street address, telephone number or audio data with the received image.

At the oral proceedings the appellant argued that the distinguishing features contributed to the improvement of data quality in the mobile device. The location information was based on GPS information or triangulation based on the device's proximity to cellular network base stations. The user could immediately get the supplemental information, check it and improve the data by correcting the street address. This "feedback loop" increased the accuracy of the information to the street address level, which was very advantageous for navigation purposes. The invention served to overcome inaccuracies resulting from the GPS system or from the triangulation. In the device of document D1 this was not possible because the user would not be able to correct latitude and longitude.

The Board could not find in the application any description of this effect of a "feedback loop" for improving quality of data with regard to the street address. Paragraphs [0032] to [0034] and [0052] cited by the appellant at the oral proceedings disclose that the supplemental information may be edited and that the street address may be provided by the user, but do not describe the effect mentioned by the appellant. Furthermore, such an effect cannot be established with regard to the claimed invention. The claim does not specify that the user actually changes or corrects the address. Besides, a street address entered by the user cannot be seen as a technical feature in the present context.

In the context of the claimed invention, the street address, telephone number, category, network address and audio data associated with the image do not contribute to solving a technical problem over document D1. Those elements therefore do not contribute to the technical character of the invention and can be included in the formulation of the problem to be solved.

Distinguishing features B.3.3 to B.3.5 are therefore considered to solve the problem of finding a way to associate street address, telephone number, category, network address and audio data with the received image in the device of document D1.

4.9.1 Starting from document D1, the skilled person would consider the commonly known general options for obtaining desired data, namely determining the data automatically and obtaining the data from the user. It would also be an obvious option to prompt the user in response to a new image being received. At the date of priority of the present application it was common practice to prompt the user to input data in response to an event. Furthermore, a similar feature is disclosed in document D4, paragraphs [0043] to [0046]. Features B.3.4 and B.3.5 are thus not inventive.

4.9.2 Since in the device of document D1 the location, e.g. "in and around Shibuya", is obtained automatically (see paragraph [0081]), the skilled person would consider determining the street address automatically.

In its letter of reply the appellant argued that the place name of document D1 was not similar to the street address of the present application. The place name could be easily obtained by comparing the position information (e.g. the co-ordinates in Fig. 9 of D1) with co-ordinates of known places (e.g. Shibuya or Ikebukuro), but the comparison result could only give a rough and coarse description like "in and around Shibuya". As a result, the user did not know the exact location where the file (i.e. image) was obtained. In the present application, a reverse geocoding service was further adopted to transform the geographic location into an exact street address which could be used to assist the user in finding the exact location where the file was generated.

The Board does not find the appellant's arguments persuasive. Reverse geocoding is the process of (reverse) coding a point location (latitude, longitude) to a readable address or place name. This can be achieved in the way described in paragraph [0081] of document D1 for the place name, by comparing the position information (or point location, see Figure 9) with co-ordinates of known places/street addresses. In document D1, the position is thus transformed into a place name by mapping software which provides reverse geocoding. It would be obvious for the skilled person to use the same technique to obtain street addresses.
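A minimal sketch of this technique, assuming a small, purely hypothetical table of known places (the co-ordinates are illustrative only):

    # Reverse geocoding as described above: map a point location
    # (latitude, longitude) to the nearest known place name.

    KNOWN_PLACES = [
        ("Shibuya",   35.658, 139.702),
        ("Ikebukuro", 35.729, 139.711),
        ("Shinjuku",  35.690, 139.700),
    ]

    def reverse_geocode(lat, lon, places=KNOWN_PLACES):
        """Return a readable name for the known place closest to (lat, lon)."""
        def squared_distance(place):
            _, plat, plon = place
            return (plat - lat) ** 2 + (plon - lon) ** 2

        name, _, _ = min(places, key=squared_distance)
        return "in and around " + name

    print(reverse_geocode(35.66, 139.70))   # -> in and around Shibuya

A finer-grained table mapping co-ordinates to street addresses, used in exactly the same way, would yield an exact street address instead of a coarse place name.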

In addition to that, document D1 discloses using a mapping service from a mapping server to obtain place names (see paragraphs [0064] to [0067], especially the last sentence of paragraph [0067], and paragraphs [0091] and [0093]). Document D1 therefore also discloses using a mapping server as in feature B.3.3. As the Board explained at the oral proceedings, the mapping server of D1 includes more complex and exact mapping data (paragraphs [0064] to [0067]), including road and route data (paragraph [0066]). It would therefore be obvious for the skilled person, if an exact street address was to be obtained, to modify the apparatus of document D1 to use other known reverse geocoding services providing that type of information.

The skilled person would therefore arrive at feature B.3.3 without exercising inventive skills.

4.9.3 It follows from the above that the skilled person, using her ordinary skills to solve the problem mentioned above, would consider automatically determining the street address, obtaining data from the user and displaying an interface in response to a new image being received in the way described in features B.3.3 to B.3.5.

4.10 In the light of the foregoing, the subject-matter of claim 1 does not involve an inventive step within the meaning of Article 56 EPC.

Conclusion

5. Since the sole request on file is not allowable, the appeal is to be dismissed.

Order

For these reasons it is decided that:

The appeal is dismissed.
