T 0025/12 (Proxying data access commands/NETAPP) of 9.10.2015

European Case Law Identifier: ECLI:EP:BA:2015:T002512.20151009
Date of decision: 09 October 2015
Case number: T 0025/12
Application number: 05851937.2
IPC class: H04L 29/08
Language of proceedings: EN
Distribution: D
Versions: Unpublished
Title of application: Systems and method for proxying data access commands in a storage system cluster
Applicant name: NetApp, Inc.
Opponent name: -
Board: 3.5.03
Headnote: -
Relevant legal provisions:
European Patent Convention Art 56
European Patent Convention Art 84
European Patent Convention Art 123(2)
Keywords: Claims - clarity - main request (yes)
Amendments - added subject-matter - main request (no)
Inventive step having regard to D4 and D5 - main request (yes)
Catchwords: -

Cited decisions: -
Citing decisions: -

Summary of Facts and Submissions

I. This appeal is against the decision of the examining division refusing European patent application No. 05851937.2, publication number EP 1 880 529 A, which was originally filed as international application PCT/US2005/042173 (publication number WO 2006/118610).

II. The reasons given for the refusal were, inter alia, that claims 1 and 15 did not comply with Articles 84 and 123(2) EPC and that the subject-matter of all claims did not involve an inventive step (Articles 52(1) and 56 EPC) having regard to the disclosures of:

D4: US 2004/030668 A and

D5: EP 1 357 476 A.

III. In the statement of grounds of appeal, the appellant requested that the decision be set aside and that a patent be granted on the basis of claims 1 to 17 of a main request or, in the alternative, of a first or a second auxiliary request, all requests as filed with the statement of grounds of appeal.

Oral proceedings were conditionally requested.

IV. In a communication accompanying a summons to oral proceedings, without prejudice to its final decision, the board raised objections under Article 52(1) EPC in conjunction with Article 56 EPC (lack of inventive step) in respect of the subject-matter of claims 1, 14 and 15 of all requests, an objection under Article 123(2) EPC (added subject-matter) in respect of claims 1, 14 and 15 of all requests, and an objection under Article 84 EPC (lack of clarity) in respect of claims 1, 14 and 15 of the main request and the second auxiliary request.

V. In response to the summons, the appellant filed a substantive response dated 11 September 2015 together with new sets of claims 1 to 16 of first and second auxiliary requests.

VI. Oral proceedings were held on 9 October 2015.

The appellant requested that the decision under appeal be set aside and that a patent be granted on the basis of claims of the main request as filed during the oral proceedings or, in the alternative, by way of a first auxiliary request, on the basis of the main request as filed with the statement of grounds of appeal, or on the basis of the second auxiliary request as filed with the letter dated 11 September 2015.

At the end of the oral proceedings, after due deliberation, the chairman announced the board's decision.

VII. Claim 1 of the main request reads as follows:

"A method for proxying data access commands from a first storage system (200A) to a second storage system (200B) in a storage cluster (130), wherein the cluster comprises a first disk shelf (112) being accessible to the first storage system and a second disk shelf (114) being accessible to the second storage system, the first and second storage systems each having a respective communications link (106, 108) to a client (104), the cluster further having an interconnect (110) between the first and second storage systems, the method comprising:

during normal cluster operation, assigning the first storage system (200A) as the owner of the first disk shelf (112), such that the first storage system (200A) may receive, via its respective communications link, and service data access requests from the client for blocks contained on the first disk shelf (112) and assigning the second storage system (200B) as the owner of the second disk shelf (114), such that the second storage system (200B) may receive, via its respective communications link, and service data access requests from the client for blocks contained on the second disk shelf (114);

configuring the client to use a proxy port of the first storage system as an alternative network path for data access requests for blocks contained on the second disk shelf (114), whereby if connectivity is lost over the communications link (108) from the client to the second storage system, the client may continue to access data serviced by the second storage system by directing a data access command to the proxy port of the first storage system;

receiving from the client the data access command at the proxy port of the first storage system (200A), the data access command being directed to the second storage system (200B) and comprising a block-based identification including a worldwide port name and a logical unit number identifier;

generating on the first storage system (200A) a file-level data access request from the received data access command by mapping the block-based identification to a file handle of the second storage system (200B);

forwarding the file-level data access request comprising the file handle to the second storage system (200B) over said interconnect (110);

processing the file-level data access request at the second storage system (200B) by accessing the second disk shelf (114);

sending a file-level response from the second storage system (200B) to the first storage system (200A) over said interconnect (110); and

returning the data associated with the file-level response from the first storage system (200A) to the client, such that the first storage system (200A) serves as a proxy for the second storage system (200B)."
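Purely by way of illustration of the sequence of steps recited in claim 1, the following sketch models the proxying flow in Python. It is not part of the application or of the decision; the class names, the mapping table and the worldwide port name are invented for the example, and the disk shelves are reduced to an in-memory dictionary.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class BlockCommand:
    """Block-based data access command as recited in claim 1."""
    wwpn: str    # worldwide port name
    lun: int     # logical unit number identifier
    offset: int  # byte offset of the requested data
    length: int  # number of bytes requested


class PartnerStorageSystem:
    """Second storage system (200B): services file-level requests by
    accessing the second disk shelf, modelled here as a dictionary."""

    def __init__(self, shelf: Dict[str, bytes]):
        self.shelf = shelf

    def serve_file_request(self, file_handle: str, offset: int, length: int) -> bytes:
        # File-level response sent back over the interconnect.
        return self.shelf[file_handle][offset:offset + length]


class ProxyingStorageSystem:
    """First storage system (200A): exposes a proxy port and translates
    block-based commands into file-level requests for the partner."""

    def __init__(self, partner: PartnerStorageSystem,
                 lun_map: Dict[Tuple[str, int], str]):
        self.partner = partner  # reached over the cluster interconnect (110)
        self.lun_map = lun_map  # (wwpn, lun) -> file handle on the partner

    def proxy_port(self, cmd: BlockCommand) -> bytes:
        # Map the block-based identification to a file handle of the partner.
        file_handle = self.lun_map[(cmd.wwpn, cmd.lun)]
        # Forward the file-level data access request over the interconnect.
        response = self.partner.serve_file_request(file_handle, cmd.offset, cmd.length)
        # Return the data associated with the file-level response to the client.
        return response


if __name__ == "__main__":
    partner = PartnerStorageSystem({"fh-0042": b"blocks held on the second disk shelf"})
    local = ProxyingStorageSystem(partner, {("50:0a:09:80:86:e7:b4:00", 0): "fh-0042"})
    cmd = BlockCommand(wwpn="50:0a:09:80:86:e7:b4:00", lun=0, offset=0, length=6)
    print(local.proxy_port(cmd))  # b'blocks'
```

In this sketch the step that characterises the claimed method appears in proxy_port(): the block-based identification (worldwide port name and logical unit number) is mapped to a file handle of the partner system, so that only the resulting file-level request and response cross the interconnect.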

VIII. Claim 13 of the main request reads as follows:

"A system for proxying data access commands from a first storage system (200A) to a second storage system (200B) in a storage cluster (130), wherein the cluster comprises a first shelf (112) being accessible to the first storage system and a second disk shelf (114) being accessible to the second storage system, the first and second storage systems each having a respective communications link (106, 108) to a client (104), the cluster further having an interconnect (110) between the first and second storage systems, the system comprising:

means for, during normal cluster operation, assigning the first storage system (200A) as the owner of the first disk shelf (112), such that the first storage system (200A) may receive, via its respective communications link, and service data access requests from the client for blocks contained on the first disk shelf (112), and for assigning the second storage system (200B) as the owner of the second disk shelf (114), such that the second storage system (200B) may receive, via its respective communications link, and service data access requests from the client for blocks contained on the second disk shelf (114);

means for configuring the client to use a proxy port of the first storage system as an alternative network path for data access requests for blocks contained on the second disk shelf (114), whereby if connectivity is lost over the communications link (108) from the client to the second storage system, the client may continue to access data serviced by the second storage system by directing a data command to the proxy port of the first storage system;

means for receiving from the client the data access command at the proxy port of the first storage system (200A), the data access command being directed to the second storage system (200B) and comprising a block-based identification including a worldwide port name and a logical unit number identifier;

means for generating on the first storage system (200A) a file-level data access request from the received data access command by mapping the block-based identification to a file handle of the second storage system (200B);

means for forwarding the file-level data access request comprising the file handle to the second storage system (200B) over said interconnect (110);

means for processing the file-level data access request at the second storage system (200B) by accessing the second disk shelf (114);

means for sending a file-level response from the second storage system (200B) to the first storage system (200A) over said interconnect (110); and

means for returning the data associated with the file-level response from the first storage system (200A) to the client, such that the first storage system (200A) serves as a proxy for the second storage system (200B)."

IX. In view of the board's decision in respect of the main request, the claims of the first and second auxiliary requests need not be reproduced here.

Reasons for the Decision

1. Main request - amendments - Article 123(2) EPC

1.1 As a preliminary comment, the board notes that, in the description, the "local storage appliance" and "partner storage appliance" correspond to the first and second storage systems, respectively (page 10, lines 27 and 28 and page 6, lines 12 to 16 and 27 to 29, and claims 1 and 23).

1.2 Claim 1 is based on the application as originally filed as follows, the basis for the respective features being indicated in square brackets:

A method for proxying data access commands from a first storage system (200A) to a second storage system (200B) in a storage cluster (130) [claim 1], wherein the cluster comprises a first disk shelf (112) being accessible to the first storage system and a second disk shelf (114) being accessible to the second storage system, the first and second storage systems each having a respective communications link (106, 108) to a client (104) [Fig. 1, page 9, lines 13 and 14, in combination with page 18, lines 14 to 27], the cluster further having an interconnect (110) between the first and second storage systems [claim 8, Fig. 1], the method comprising:

during normal cluster operation, assigning the first storage system (200A) as the owner of the first disk shelf (112), such that the first storage system (200A) may receive, via its respective communications link, and service data access requests from the client for blocks contained on the first disk shelf (112) and assigning the second storage system (200B) as the owner of the second disk shelf (114), such that the second storage system (200B) may receive, via its respective communications link, and service data access requests from the client for blocks contained on the second disk shelf (114) [page 9, lines 20 to 26, and page 18, lines 14 to 16];

configuring the client to use a proxy port of the first storage system as an alternative network path for data access requests for blocks contained on the second disk shelf (114), whereby if connectivity is lost over the communications link (108) from the client to the second storage system, the client may continue to access data serviced by the second storage system by directing a data access command to the proxy port of the first storage system [page 19, lines 6 to 9, and page 21, lines 24 to 26];

receiving from the client the data access command at the proxy port of the first storage system (200A), the data access command being directed to the second storage system (200B) and comprising a block-based identification including a worldwide port name and a logical unit number identifier [claim 1, page 6, lines 12 to 14, and page 19, lines 6 to 9];

generating on the first storage system (200A) a file-level data access request from the received data access command [claim 1] by mapping the block-based identification to a file handle of the second storage system (200B) [page 6, lines 12 to 14, and page 19, lines 9 to 11];

forwarding the file-level data access request comprising the file handle to the second storage system (200B) over said interconnect (110) [claims 1 and 9 and page 6, lines 14 to 16];

processing the file-level data access request at the second storage system (200B) by accessing the second disk shelf (114) [claim 1 and page 6, lines 23 to 25];

sending a file-level response from the second storage system (200B) to the first storage system (200A) over said interconnect (110) [claim 1 and page 6, lines 25 to 27]; and

returning the data associated with the file-level response from the first storage system (200A) to the client [claim 10 and page 6, lines 25 to 27], such that the first storage system (200A) serves as a proxy for the second storage system (200B) [page 6, lines 27 to 29].

1.3 The board notes that the examining division held that the application as originally filed was silent about configuring a client following a loss of connectivity and referred to page 6, lines 10 and 11, page 21, lines 24 to 26 and page 29, lines 3 to 4, of the description. It concluded that the application as filed did not provide a basis for the feature "configuring a client to use a proxy port of the first storage system as an alternative network path for data access commands directed to the second storage system following a loss of connectivity from the client to the second storage system" (cf. point 2.1 of the reasons for the decision).

In present claim 1, this feature reads "configuring the client to use a proxy port of the first storage system as an alternative network path for data access requests for blocks contained on the second disk shelf (114), whereby if connectivity is lost over the communications link (108) from the client to the second storage system, the client may continue to access data serviced by the second storage system by directing a data access command to the proxy port of the first storage system".

On page 19, lines 6 to 9, it is stated that, "If connectivity is lost to the partner storage appliance, a client may continue to access data serviced by the partner storage appliance by directing data access requests to the proxy port of the local storage appliance", and on page 18, lines 22 to 24, it is stated that the proxy port may be utilized to proxy data access to the partner storage appliance. The partner storage appliance services access requests for blocks contained on the second disk shelf, and Fig. 1 shows a communications link 108 between the client 104 and the second storage system 200B. This is summarized in the description on page 21, lines 24 to 26, as follows: "Clients of the storage system cluster are configured to use the proxy port as an alternative network path to disks of the cluster". The board is thus satisfied that the application as filed provides a basis for the above-mentioned feature. Hence, the objection of the examining division has been overcome.

1.4 The feature at the beginning of claim 1 according to which the first disk shelf is accessible to the first storage system and the second disk shelf is accessible to the second storage system is based on page 9, lines 13 and 14, stating that each disk shelf is accessible to each storage system, providing redundant data paths in the event of failover. Further, at page 18, lines 24 to 27, an embodiment is described which does not have a standby port used for the failover event.

1.5 The above applies, mutatis mutandis, to the subject-matter of independent claim 13, which comprises constructional features corresponding to the method steps of claim 1.

1.6 The board therefore concludes that claims 1 and 13 of the main request meet the requirements of Article 123(2) EPC.

2. Main request - clarity - Article 84 EPC

The clarity objection raised by the examining division (cf. point 3 of the reasons) has been overcome by the amendment. Further, claims 1 and 13 do not give rise to any other objections under Article 84 EPC.

The board therefore concludes that claims 1 and 13 of the main request meet the requirements of Article 84 EPC.

3. Main request - inventive step - Articles 52(1) and 56 EPC

3.1 D5 discloses (cf. Fig. 1), using the language of claim 1, a method for proxying data access commands from a first storage system 10 to a second storage system 20 in a storage cluster (paragraph [0018], third and fifth sentences and paragraph [0053]), wherein the cluster comprises a first disk shelf 60 being accessible to the first storage system 10 and a second disk shelf being accessible to the second storage system (paragraphs [0041] and [0045], Fig. 1), the first storage system having a respective communications link to a client 30, the cluster further having an interconnect 51 between the first and second storage systems (paragraph [0037], first sentence, Fig. 1), the method comprising:

during normal cluster operation, assigning the first storage system as the owner of the first disk shelf, such that the first storage system may receive, via its respective communications link, and service data access requests from the client for blocks contained on the first disk shelf (paragraph [0018], third sentence, paragraph [0038], first sentence and paragraph [0041], first sentence) and assigning the second storage system as the owner of the second disk shelf, such that the second storage system may receive and service data access requests from the client for blocks contained on the second disk shelf (paragraph [0045]);

configuring the client to use a proxy port of the first storage system as a network path for data access requests for blocks contained on the second disk shelf (paragraph [0018], third and fifth sentences, and paragraphs [0038] and [0053]), whereby the client may access data serviced by the second storage system by directing a data access command to the proxy port of the first storage system (paragraphs [0053] and [0055]);

receiving from the client the data access command at the first storage system, the data access command being directed to the second storage system (paragraph [0018], third sentence, paragraphs [0038], [0055] and [0057]) and comprising a block-based identification including a worldwide port name and a logical unit number identifier (paragraphs [0041] and [0049], Fig. 2);

generating on the first storage system a data access request from the received data access command (paragraph [0019], first sentence);

forwarding the data access request to the second storage system over said interconnect (paragraph [0018], fourth sentence and paragraphs [0065] and [0066]);

processing the data access request at the second storage system by accessing the second disk shelf (paragraph [0065], last sentence);

sending a response from the second storage system to the first storage system over said interconnect; and

returning the data associated with the response from the first storage system to the client, such that the first storage system serves as a proxy for the second storage system (paragraph [0018], fifth sentence and paragraph [0069]).

3.2 The subject-matter of claim 1 differs from the method disclosed in D5 at least in that, according to claim 1, the second storage system has a respective communications link to the client and in that, during normal cluster operation, the second storage system may receive, via its respective communications link, and service data access requests from the client for blocks contained on the second disk shelf.
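This distinguishing feature can be pictured from the client's side as in the following sketch, which assumes the dual-link topology of Fig. 1 of the application (direct links 106 and 108 plus the proxy port of the first storage system). It is an illustration only, not part of the decision, and all names are invented for the example.

```python
class LinkDown(Exception):
    """Raised when a communications link to a storage system has failed."""


class DirectLink:
    """Direct communications link (106 or 108) from the client to a storage system."""

    def __init__(self, up: bool = True):
        self.up = up

    def send(self, command: str) -> str:
        if not self.up:
            raise LinkDown(command)
        return f"serviced over the direct link: {command}"


class ProxyPort:
    """Proxy port of the first storage system; forwards the command to the
    second storage system over the cluster interconnect."""

    def send(self, command: str) -> str:
        return f"serviced via the proxy port and the interconnect: {command}"


class Client:
    """Client (104) configured in advance with the proxy port as an
    alternative network path to data held on the second disk shelf."""

    def __init__(self, link_to_second: DirectLink, proxy_port_of_first: ProxyPort):
        self.link_to_second = link_to_second
        self.proxy_port_of_first = proxy_port_of_first

    def access_second_shelf(self, command: str) -> str:
        try:
            # Normal cluster operation: the second storage system receives and
            # services the request over its own communications link (108).
            return self.link_to_second.send(command)
        except LinkDown:
            # Connectivity lost on link 108: redirect the same command to the
            # proxy port of the first storage system.
            return self.proxy_port_of_first.send(command)


if __name__ == "__main__":
    client = Client(DirectLink(up=False), ProxyPort())
    print(client.access_second_shelf("read LUN 0, blocks 0-7"))
```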

3.3 Starting out from D5, the problem underlying the claimed method may thus be seen as increasing the accessibility of the storage system cluster.

3.4 D5 does not disclose or suggest the above-mentioned distinguishing feature. Further, D5 is silent about resolving a failure in a communications link by using an alternative route.

With respect to document D4, the board notes that the above-mentioned distinguishing feature is neither disclosed in nor suggested by D4. More specifically, D4 (see the abstract and paragraph [0001]) is concerned with storage systems supporting block access and file access protocols in a single system, and does not disclose a storage system cluster with multiple storage systems interconnected with each other (cf. Fig. 1).

3.5 The board notes that the examining division, at point 6.1 of the reasons of the decision, held that the feature that the second storage system has a connection to the client via which data access requests for the respective storage system may be transmitted was anticipated by D5. This finding was based on paragraph [0039] of the description of D5, according to which the method is applied to "a storage system operated in a form in which the second storage control apparatus 20 is connected to the host computer 30". However, paragraph [0039] further states that the method is applied to the storage system such that "the operation of a storage system has been changed such that the first storage control apparatus 10 is newly introduced as a succeeding apparatus of the second storage apparatus 20 to a storage system operated in a form in which the second storage control apparatus 20 is connected to the host computer 30, and the second storage control apparatus 20 functions as an apparatus for extending or supporting the functions of the first storage control apparatus 10." (underlining by the board). Thus, the operation of the second storage control apparatus does not continue unchanged, and at least some of its functions are shifted to the succeeding first storage control apparatus. Since D5 otherwise only discloses storage systems with a first connection from the host to the first storage control apparatus and a second connection from the first to the second storage control apparatus, it follows that, once the first storage control apparatus is introduced into the above-mentioned storage system, the connection between the second storage control apparatus and the host computer is removed and the communication between the host computer and the second storage control apparatus is only via the first storage control apparatus. Thus, D5 does not disclose a storage system with a first and a second storage control apparatus, in which each has a respective connection to the host computer, as is specified in claim 1 of the main request.

3.6 The above considerations apply, mutatis mutandis, to the subject-matter of claim 13.

3.7 The board therefore concludes that the subject-matter of claims 1 and 13 of the main request involves an inventive step when starting out from D5 and taking into account the teaching of D4 (Articles 52(1) and 56 EPC).

4. In view of the above, the decision under appeal is to be set aside.

5. Remittal

5.1 The board notes that in the decision under appeal, the examining division, in its reasoning concerning lack of inventive step, referred exclusively to D4 and D5. Hence, the question of whether or not the remaining documents used in the examination procedure, i.e. documents D1 to D3, possibly in combination with D4 and/or D5, are relevant to the question of inventive step is still to be examined. Further, in view of the amendments to the claims, it may be necessary for the description to be adapted.

These issues are considered best dealt with by the examining division.

5.2 In accordance with Article 111(1) EPC, the board therefore considers it appropriate to remit the case to the department of first instance for further prosecution.

5.3 The board notes that, in claim 13 as submitted at the oral proceedings, in the feature "the client may continue to access data serviced by the second storage system by directing a data command to the proxy port of the first storage system" at the end of the third paragraph, the term "data command" should read "data access command".

Order

For these reasons it is decided that:

1. The decision under appeal is set aside.

2. The case is remitted to the department of first instance for further prosecution on the basis of claims 1 and 13 of the main request as filed at the oral proceedings.
