Verifiable Homomorphic Encrypted Computations for Cloud Computing

Cloud computing is becoming an essential part of computing, especially for enterprises. As the need for cloud computing increases, the need for cloud data privacy, confidentiality, and integrity also becomes essential. Among potential solutions, homomorphic encryption can provide the needed privacy and confidentiality. Unlike traditional cryptosystems, homomorphic encryption allows computation to be delegated to the cloud provider while the data remains in its encrypted form. Unfortunately, the solution still lacks data integrity. While on the cloud, valid homomorphically encrypted data may be swapped with other valid homomorphically encrypted data. This paper proposes a verification scheme based on the modular residue to validate homomorphic encryption computation over an integer finite field for use in cloud computing, so that data confidentiality, privacy, and integrity can be enforced during an outsourced computation. The performance of the proposed scheme varies with the underlying cryptosystem. However, over the tested cryptosystems, the scheme has a 1.5% storage overhead and a computational overhead that can be configured to work below 1%. Such overhead is an acceptable trade-off for verifying cloud computation, which is highly needed in cloud computing.

Keywords—Cloud computing; computation verification; data confidentiality; data integrity; data privacy; distributed processing; homomorphic encryption


I. INTRODUCTION
The demanding needs of modern computing have prompted many enterprises to outsource their data solutions to cloud service providers (CSP). CSPs provide services that improve performance efficiency and ease of maintenance for their adopters. On top of these improvements, adopting cloud services also offers savings in information technology infrastructure costs: most of the infrastructure cost is transferred to the CSPs, and clients pay only for what is used. Thus, enterprises can conveniently store, maintain, and manage data files remotely with reduced operating costs [1]. The CSP market is currently brimming with CSPs and their innovative and competitive products. In general, the current success of cloud services is mostly in cloud storage. However, the market for cloud computing is also building momentum. Implementing cloud computing empowers enterprises to become more competitive by having computing platforms that are scalable, agile, and reliable. As a result, the cloud computing market is projected to reach USD 623.3 billion by 2023 [2].
A typical cloud computing ecosystem consists of cloud users (client), CSP, and the network infrastructure that connects the client and the CSP. Models of CSP architecture consist of software as a service, infrastructure as a service, and platform as a service. In addition, there are a few CSP designs that include private, public, hybrid, and community clouds [3].
Even though the outlook for cloud computing is positive, this technology has always been plagued by the trade-off between cost and security. The issue lies in the principle of cloud computing, where enterprises need to delegate the task of protecting their data to the CSP [4]. Subsequently, data sovereignty is lost once the data is stored at a remote CSP. This absence of control over data security presents data protection problems. According to the Cloud Security Alliance (CSA) analysis, for the third time in a row [5], [6], [7], data breaches topped the list of threats in the cloud. Data is considered breached once its information is disclosed, manipulated, or used by unauthorized parties. A data breach may be the primary goal of a targeted attack or merely the result of human error. However, the management of a CSP is centralized, and it cannot guarantee the reliability of its employees [8]. [9] found that internal breaches are more serious and costly than external attacks. The reason behind this result is that insiders know the system and target valuable information, while outsiders steal only what they can access [10].
As part of the security risk assessment, data privacy, confidentiality, and integrity must be considered to mitigate potential risks. Privacy refers to the access control that the clients have over their data. Confidentiality means only authorized parties can access the data. In comparison, data integrity refers to the assurance of data consistency over its entire lifecycle [11].
In order to ensure privacy and confidentiality in cloud computing, researchers have indicated that homomorphic encryption (HE) is one of the promising methods for remote manipulation of encrypted data [12], [13]. However, although HE makes computation delegation possible, it has security flaws that can affect the validity of outsourced calculations. Specifically, HE is malleable in nature, which makes it non-compliant with the indistinguishability under adaptive chosen-ciphertext attack (IND-CCA2) security notion [14]. Therefore, with HE alone under centralized cloud data management, data integrity is at stake. For cloud computing, the threats to data integrity can be numerous and varied. This paper addresses the problem of data integrity verification (DIV) of CSP computations over homomorphically encrypted data. The focus of this paper is on the behavior of a CSP that stores and computes sensitive data. Specifically, the threats from the CSP can be enumerated as follows: 1) An attacker (CSP) violates data integrity by directly substituting the given ciphertext with another valid ciphertext. 2) An attacker (CSP) maliciously substitutes a given computation query with another valid query.
Integer-based HE has been extensively researched and used. Therefore, this research aims to achieve computational integrity of HE over an integer finite field, which contributes to strengthening the security of HE against data tampering and thus achieves privacy, confidentiality, and integrity of the processed data.
The rest of the paper is organized as follows. Section II presents the current DIV methods with their limitations. Section III illustrates the candidate HE cryptosystems and presents the proposed scheme. The results and discussion are shown in Section IV. Finally, Section V provides the conclusion.

II. RELATED WORK
Researchers are still looking for a comprehensive security solution that can bring cloud computing to the mainstream. HE has been utilized with different approaches to address the DIV problem. Classical auditing methods on a single data copy have had broad resonance in addressing this problem, represented by provable data possession (PDP) techniques [15], [16], [17], [18], [19], [20] and proof of retrievability (POR) techniques [21], [22], [23], [24], [25], [26], [27]. However, these methods are ineffective in the case of data loss or corruption on the servers. An alternative is to archive multiple replicas of each file to use if the original copy has been compromised; this model is known as distributed data integrity auditing, as in [28], [29], [30], [31].
As the previous schemes by their nature allow only a limited number of queries, there are proposed solutions [32], [33], [34], [35] that assign the auditing task to a single third-party auditor (TPA) that independently manages the data audits. There are also works [36], [37], [38] where the auditing task is assigned to multiple TPAs to benefit from simultaneous audit sessions. Nevertheless, all the aforementioned schemes focus on checking the integrity of data stored on cloud computing servers without verifying the validity and efficiency of CSP computations over the data.
Otherwise, [46] presented a verifiable scheme that implements a commitment utilizing probabilistically checkable proofs. At the same time, [44] extended the scope of verifiable computation in two essential directions: public delegation and public verifiability. [47] used a quadratic arithmetic program and elliptic-curve encryption to obtain a public verification commitment of constant size, regardless of the number of executed operations. Also, [40] suggested a verifiable method for computations of quadratic polynomials over a large number of variables. Meanwhile, [45] tried to solve possible collusion attacks in the ElGamal scheme by re-encrypting the ciphertext using the receiver's public key. After that, [41] presented a general incremental verifiable database system by integrating the primitive vector commitment and the encrypt-then-incremental MAC encryption mode. [48] suggested a framework using a hash function over ciphertext and dual CSPs to check data duplication, and the scheme of [43] promised an improved deduplication system in a hybrid cloud architecture. Furthermore, [58] introduced the IKGSR scheme to improve the RSA key generation function, based on the use of four giant prime numbers to generate the encryption and decryption keys. In short, all of these proposals share the same framework idea of binding the CSP to generate a commitment; the client then uses this commitment to verify the CSP's performance over the ciphertexts.
Whereas [50] proposed a public evaluation verification scheme over ciphertexts that interacts with a trusted authenticator (TA) and a public auditor proxy (PAP). Although it reduces the load on both the cloud users and the verifier, the scheme is inefficient for practical applications because it relies on a complicated FHE scheme.
While numerous attempts have been made to overcome the CSP's computation cheating attack problems, they remain subject to certain fundamental flaws. First of all, most of the previous works are constructed for particular structures and cannot be ported to other environments; even a minor alteration can cause the schemes to fail due to their particular layouts. Furthermore, all the proposed models assume the centralization of the CSP's (or a third party's) authority over the data. As the CSP can manipulate the computations applied to the data, it can generate a commitment value that matches the applied computations. Thus, the client still receives an adequate commitment for the computed ciphertexts while the CSP performs the computation fraud attack.
In different ways, some researchers, such as [11] and [4], sought to use blockchain technology to prove the work of cloud service providers on cloud data. [11] relied on public cryptocurrency blockchains such as Bitcoin and Ethereum to store the hash of the database issued by at least four cloud service providers and compared all the issued results. [4] used the proof-of-work consensus to delay the creation of a new record in the database, requiring at least 6 minutes to create a single record.
Despite the security effectiveness of the proposed schemes, they are quite expensive; i.e., in addition to the cost of the required computations, blockchain implementation costs are added as additional costs that clients have to pay. Moreover, adopting the Byzantine Fault Tolerance consensus for both proposals would at least quadruple both costs. Also, using proof-of-work consensus in the scheme of [4] will impact cloud computing business performance. Therefore, our proposal is based on modular arithmetic to provide a verification mechanism for the processes applied to the data at the lowest cost. Furthermore, the use of modular arithmetic dramatically increases digital signal processing performance in algorithms with extensive use of addition and multiplication; thus, it provides speed and low energy consumption and promises high reliability and fault tolerance [59], [60], [61]. The analysis of the latest scientific papers [62], [63], [64], [65], [66] confirms that the use of modular computation is continuously expanding. They ascribe this to modular arithmetic's ability to significantly increase the reliability of monitoring systems and their error tolerance by increasing the resources used while preserving the operating time. As a result, major companies such as Cisco and Kabushiki Kaisha Toshiba are rushing to research and apply modular arithmetic [67].

III. VERIFICATION SCHEME DESIGN
The migration of sensitive data to a CSP is a source of security issues. If sensitive data are migrated to the CSP, the client must be assured that proper data security measures are in place. In order to ensure data privacy and confidentiality, this paper assumes the use of HE. Subsequently, this section presents the proposed scheme, which enables the client to verify the integrity of the computations applied over the encrypted data. Fig. 1 shows the flow diagram of the proposed scheme.

A. Scheme Preliminaries
This paper proposes the use of HE over Z*_p in encrypting the data before sending them to the CSP, thus allowing the CSP to perform operations on the encrypted data at the client's request without disclosing its content. In the context of this paper, HE is briefly defined as follows. An HE over operation '⋄' in a finite field Z*_p is an encryption scheme that supports the following equation:

Enc_{k_e}(m_1) ⋄ Enc_{k_e}(m_2) = Enc_{k_e}(m_1 ⋄ m_2),

where Enc(.) is an encryption algorithm, k_e is the encryption key, and (m_1, m_2) are plaintexts. An HE scheme is primarily characterized by four operations: KeyGen, Enc, Dec, and Eval. Eval is the HE-specific operation: it takes ciphertexts as input and outputs a ciphertext corresponding to the operated plaintexts [68]. The Eval function in this paper supports both addition and multiplication over Z*_p. Table I summarizes the math notations used in this article. Depending on the supported homomorphism features, HE schemes can perform different types of operations. A Partially Homomorphic Encryption (PHE) scheme can perform only one type of computation at any given time: either multiplicative homomorphism, e.g. RSA [69] and ElGamal [70], or additive homomorphism, e.g. Benaloh [71], Paillier [72], and Okamoto-Uchiyama (OU) [73]. A Somewhat Homomorphic Encryption (SWHE) scheme supports both properties, but for a limited number of operations; for example, Boneh-Goh-Nissim (BGN) [74] allows an unlimited number of additions but only one multiplication. In this paper, six HE schemes over Z_p are benchmarked against the proposed scheme. The following subsections introduce the six cryptosystems.
1) RSA Cryptosystem: RSA is a block cipher algorithm over an integer finite field whose evaluation function supports only homomorphic multiplication over ciphertexts [69]. In RSA, the plaintext and the ciphertext (both represented as positive integers) are bounded by n, where n < 2^4096 for practical purposes. The four main operations governing the RSA multiplicative-PHE cryptosystem are as follows: • KeyGen: The public key P_k = {e, n} and the private key P_rk = {d, n} are built upon two large primes p, q such that p ≠ q and n = p × q. The integer e is randomly selected such that gcd(ϕ(n), e) = 1 and 1 < e < ϕ(n), and d ≡ e^{-1} mod ϕ(n), where ϕ(n) = (p − 1)(q − 1).
• Enc: An arbitrary plaintext m < n is encrypted with the public key P_k = {e, n} as c = Enc_{P_k}(m) = m^e mod n.
• Dec: The ciphertext c is decrypted with the private key P_rk = {d, n} as shown in Equation (3): m = Dec_{P_rk}(c) = c^d mod n. (3)
• Eval: RSA satisfies multiplicative homomorphism, since Enc(m_1) × Enc(m_2) = (m_1 × m_2)^e mod n = Enc(m_1 × m_2).
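The multiplicative homomorphism above can be sketched in a few lines of Python; the tiny primes are for illustration only and the helper names are our own, not the paper's implementation.

```python
# Toy sketch of RSA's multiplicative homomorphism (illustrative key sizes only;
# real deployments use moduli of at least 2048 bits).
p, q = 61, 53
n = p * q                         # modulus n = 3233
phi = (p - 1) * (q - 1)           # phi(n) = 3120
e = 17                            # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)               # private exponent d = e^{-1} mod phi(n)

def enc(m):
    return pow(m, e, n)           # c = m^e mod n

def dec(c):
    return pow(c, d, n)           # m = c^d mod n, Equation (3)

m1, m2 = 7, 11
c = (enc(m1) * enc(m2)) % n       # CSP-side homomorphic multiplication
assert dec(c) == (m1 * m2) % n    # decrypts to the product of the plaintexts
```

The three-argument `pow` performs fast modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse.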
2) ElGamal Cryptosystem: ElGamal proposed a probabilistic public-key cryptosystem in 1985 [70]. The scheme is based on the Diffie-Hellman key exchange, and its security rests on the hardness of the discrete logarithm problem. A simple ElGamal scheme is as follows: • KeyGen: Key generation requires a cyclic group G of order n with generator g. h = g^y is calculated from a randomly chosen y ∈ Z*_n. The public and private keys are P_k = {G, n, g, h} and P_rk = {y, n}, respectively.
• Enc: Encrypting a plaintext m requires a random integer r to be selected and kept hidden. The result of encrypting m is a ciphertext pair c = (c_1, c_2) = (g^r, m · h^r).
• Dec: Decryption is performed by using the private key {y, n} to compute s = c_1^y, after which the plaintext is recovered as m = c_2 · s^{-1}.
• Eval: The ElGamal cryptosystem satisfies multiplicative homomorphism as shown in Equation (7): Enc(m_1) × Enc(m_2) = (g^{r_1+r_2}, (m_1 × m_2) · h^{r_1+r_2}) = Enc(m_1 × m_2). (7)
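As with RSA, the component-wise ciphertext product decrypts to the product of plaintexts. A toy Python sketch (tiny, illustrative parameters of our own choosing):

```python
import random

# Toy multiplicative-ElGamal sketch over Z_p* (illustration only; real
# deployments use groups of cryptographic size).
p = 467                                  # small prime for illustration
g = 2                                    # assumed generator
y = random.randrange(2, p - 1)           # private key
h = pow(g, y, p)                         # public component h = g^y

def enc(m):
    r = random.randrange(2, p - 1)       # per-message randomness, kept hidden
    return (pow(g, r, p), (m * pow(h, r, p)) % p)   # ciphertext pair (c1, c2)

def dec(c1, c2):
    s = pow(c1, y, p)                    # shared secret s = c1^y
    return (c2 * pow(s, -1, p)) % p      # m = c2 * s^{-1} mod p

m1, m2 = 5, 9
a = enc(m1)
b = enc(m2)
c = (a[0] * b[0] % p, a[1] * b[1] % p)   # component-wise ciphertext product
assert dec(*c) == (m1 * m2) % p          # Equation (7): decrypts to m1*m2
```

Note that each plaintext becomes a ciphertext pair, which is why verifying ElGamal later costs roughly twice the RSA residue work.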
3) Benaloh Cryptosystem: The Benaloh scheme is based on the Goldwasser-Micali (GM) public-key cryptosystem [71]. It enhances the GM scheme by encrypting blocks of bits rather than bit by bit. The security assumption of the Benaloh scheme is the higher residuosity problem, which is a generalization of the quadratic residuosity problem (x^2). The Benaloh additive-PHE cryptosystem is described as follows: • KeyGen: For a given block size r, two large primes p and q are selected such that r divides p − 1, gcd(r, (p − 1)/r) = 1, and gcd(r, q − 1) = 1. Subsequently, n and ϕ(n) are calculated as n = pq and ϕ(n) = (p − 1)(q − 1), respectively. y ∈ Z*_n is selected such that y^{ϕ(n)/r} ≢ 1 mod n, where Z*_n is the multiplicative group of integers modulo n. The public key is published as (y, n), while (p, q) represents the private key.
• Enc: To encrypt a plaintext m ∈ Z_r, where Z_r = {0, 1, ..., r − 1}, a random u ∈ Z*_n is selected, and the ciphertext is c = Enc(m) = y^m u^r mod n. • Dec: Decryption proceeds by exhaustive search for i ∈ Z_r, in which the plaintext m is recovered by using Equation (9): m = i such that (y^{−i} c)^{ϕ(n)/r} ≡ 1 mod n. (9)
• Eval: The Benaloh cryptosystem satisfies additive homomorphism:
Enc(m_1) × Enc(m_2) = ((y^{m_1} u_1^r) mod n) × ((y^{m_2} u_2^r) mod n) = y^{m_1+m_2}(u_1 u_2)^r mod n = Enc(m_1 + m_2).
4) Okamoto-Uchiyama Cryptosystem: [73] proposed an additive cryptosystem that improves computational performance by defining n = p^2 q within the same domain Z*_n. The security assumption of the OU cryptosystem is based on the p-subgroup problem, which makes it equivalent to the factorization of n. The OU cryptosystem is as follows: • KeyGen: After determining the value of n, a random number g ∈ {2, ..., n − 1} is selected such that g^{p−1} ≢ 1 mod p^2. Subsequently, h is calculated as h = g^n mod n. The public key and the private key are {n, g, h} and {p, q}, respectively.
• Enc: A plaintext m is encrypted with a random r ∈ Z_n as c = Enc_{P_k}(m) = g^m h^r mod n.
• Dec: To recover the plaintext, the private key P_rk = {p, q} is used with Equation (12):
m = L(c^{p−1} mod p^2) / L(g^{p−1} mod p^2) mod p, where L(x) = (x − 1)/p. (12)

5) Paillier Cryptosystem:
Paillier's cryptosystem is a probabilistic public-key cryptosystem based on higher-order residue classes that supports only additive homomorphic computations [72]. The four main operations governing the Paillier additive-PHE cryptosystem are as follows: • KeyGen: Two large primes p, q and g ∈ Z*_{n^2} are randomly selected such that gcd(L(g^λ mod n^2), n) = 1, where n = p × q and the functions L and λ are defined as L(x) = (x − 1)/n and λ = lcm(p − 1, q − 1). • Enc: The encryption process uses the public key P_k = {n, g} to encrypt an arbitrary plaintext m ∈ Z*_n with a randomly selected integer r ∈ Z*_n, producing the ciphertext c = Enc_{P_k}(m) = g^m r^n mod n^2.
• Dec: The decryption process uses the private key P_rk = λ as shown in Equation (17):
m = Dec_{P_rk}(c) = L(c^λ mod n^2) / L(g^λ mod n^2) mod n. (17)
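Paillier's additive homomorphism (ciphertext multiplication ↦ plaintext addition) can be sketched as follows; the tiny primes and the common g = n + 1 parameter choice are illustrative assumptions, not the paper's setup.

```python
import math
import random

# Toy Paillier sketch showing additive homomorphism (illustration only).
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1                                # a common, convenient parameter choice
lam = math.lcm(p - 1, q - 1)             # lambda = lcm(p-1, q-1)

def L(x):
    return (x - 1) // n                  # L(x) = (x - 1) / n

mu = pow(L(pow(g, lam, n2)), -1, n)      # precomputed decryption constant

def rand_unit():
    while True:                          # draw r uniformly from Z_n^*
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            return r

def enc(m):
    return (pow(g, m, n2) * pow(rand_unit(), n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 123, 456
c = (enc(m1) * enc(m2)) % n2             # multiplying ciphertexts...
assert dec(c) == (m1 + m2) % n           # ...adds the underlying plaintexts
```

The same ciphertext-multiplication shape is what lets the scheme's residue check later treat additive-PHE and multiplicative-PHE almost identically.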
6) Boneh-Goh-Nissim Cryptosystem: BGN defined a Paillier-like cryptosystem with an unlimited number of homomorphic additions and a single multiplication on the plaintext [74]. The BGN cryptosystem is described as follows: • KeyGen: Two large primes q and r are chosen to produce n = qr, and a positive integer T < q is selected randomly. Subsequently, two multiplicative groups G, G_1 of order n that support a bilinear pairing e : G × G → G_1 are selected. Random generators g, u ∈ G are chosen, and h = u^r, where h is a generator of the subgroup of order q. The public key is P_k = {n, g, h, G, G_1, e}, and the private key is P_rk = {q, n}.
• Enc: For a plaintext m ∈ Z_T, a random r ∈ Z_n is selected. The encryption process is as shown by Equation (19).
• Dec: Decrypting a ciphertext c ∈ G using the private key P_rk = {q, n} is shown in Equation (20); the message m can be recovered in time O(√T), since the message is bounded by T.
• Eval: BGN satisfies unlimited additive homomorphism as shown in Equation (21) and a single multiplicative homomorphism as represented in Equation (22).
An HE cryptosystem is malleable, and therefore it is not IND-CCA2 secure by design. Data integrity can still be compromised by the CSP and go undetected. For example, the CSP can implicitly substitute a given ciphertext, or the cumulative result, with another valid ciphertext without needing to know the content of the substituted data. Unlike confidentiality and privacy, once integrity is compromised there is no way to restore the original data. Therefore, data integrity needs to be enforced on such outsourced computations.
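The substitution threat above is easy to demonstrate. A minimal sketch with toy RSA parameters (our own illustrative numbers): a malicious CSP turns Enc(m) into Enc(2m) using nothing but the public key.

```python
# Sketch of the malleability threat: a malicious CSP blindly maps Enc(m) to
# Enc(2m) without decrypting anything (tiny primes, illustration only).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

m = 42
c = pow(m, e, n)                          # honest ciphertext from the client
c_tampered = (c * pow(2, e, n)) % n       # CSP folds in Enc(2) -- no key needed
assert pow(c_tampered, d, n) == (2 * m) % n   # client decrypts 84, not 42
```

The tampered ciphertext is perfectly valid, which is exactly why a separate integrity check is needed.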

B. Proposed Scheme
The verification scheme has three phases: environment setup, computation outsourcing to the CSP by the client, and computation validation of CSP's work by the client, in which the last two phases can be repeated as required (see Table II). The three phases are thoroughly discussed in the subsequent context.

Phase 1 (Setup):
The client initiates the setup phase by defining the system parameters. The proposed scheme involves two different number systems. The first is the finite field Z*_p, where the HE calculations take place. The second is an n-bit binary number system, where n is a positive integer such that 2^n is much smaller than the prime p. The HE encryption function takes as input a public key k_e and a message m of index i and produces a ciphertext c_i as output, as shown in Equation (23): c_i = Enc_{k_e}(m_i). (23)
Subsequently, the client identifies a positive integer v < L as the secret value, where L is the largest integer allowed in the implemented system; v also serves as the verification parameter.

Phase 2 (Outsourcing):
The client sends the ciphertexts, together with an arithmetic expression <expr> over them, to the CSP. The CSP homomorphically evaluates <expr> and returns the resulting ciphertext, c_r, to the client.

Phase 3 (Validation):
To verify the CSP's calculation(s), the client needs to ensure that the value received from the CSP, c_r, is the result of the arithmetic expression outsourced earlier, <expr>.
The equality of a basic arithmetic expression can be validated by evaluating its modular residue. Let <expr> be the outsourced arithmetic expression sent by the client to the CSP, and let c_r be the calculation result received by the client from the CSP. The client can then validate c_r by comparing the modular residues of both c_r and <expr>, as depicted by Equation (25): c_r mod v ≡ <expr> mod v. (25)
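A minimal sketch of the check in Equation (25), with stand-in big integers playing the role of ciphertexts and an assumed 61-bit secret modulus v (all values are illustrative):

```python
# The client keeps only the v-residues of the ciphertexts and checks the CSP's
# big-integer result against an evaluation over those small residues.
v = 2**61 - 1                               # client's secret modulus (assumed)
c1, c2, c3 = 10**600 + 7, 10**550 + 9, 10**500 + 3   # stand-in big ciphertexts

c_r = c1 * c2 + c3                          # CSP evaluates <expr> = c1*c2 + c3
lhs = c_r % v                               # residue of the returned result
rhs = ((c1 % v) * (c2 % v) + (c3 % v)) % v  # client's small-residue evaluation
assert lhs == rhs                           # honest result is accepted

assert (c_r + 1) % v != rhs                 # a tampered result is rejected
```

A substituted result escapes detection only if it happens to agree with the true result modulo the secret v, which the CSP cannot target without knowing v.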
To simplify the calculation of the right-hand-side of Equation (25), the expression, <expr>, is further expanded by using the following grammar based on modular arithmetic properties.
<expr> ::= <term> '+' <expr> | <term>
<term> ::= <factor> '×' <term> | <factor>
<factor> ::= '(' <expr> ')' mod v | <const> mod v
<const> ::= integer (26)

IV. RESULTS AND DISCUSSION

On the software side, the proposed verification scheme running at the CSP was implemented using NumPy [75], a compiled library for Python that is efficient at manipulating big-integer calculations. For the client implementation, which does not require big-integer calculations, a basic C++ compiler was used. This paper further assumes the use of the two HE properties: PHE, represented by the multiplicative homomorphic (RSA, ElGamal) and additive homomorphic (Benaloh, OU, Paillier) schemes, and SWHE, represented by BGN. In the following subsections, we present the results of applying the verification scheme to the different ciphertext sizes generated by the candidate cryptosystems. The purpose of these calculations is to assess the implementation costs and performance of the proposed scheme for all candidate cryptosystems.
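One possible reading of grammar (26) is a tiny recursive evaluator that computes <expr> mod v without ever forming the CSP's big-integer value; the tuple encoding of expressions below is an illustrative assumption, not the paper's implementation.

```python
# Evaluate an expression tree modulo v, pushing 'mod v' inward exactly as
# grammar (26) does for constants, products, and sums.
def expr_mod(node, v):
    """node is an int constant, ('+', a, b), or ('*', a, b)."""
    if isinstance(node, int):
        return node % v                                  # <const> mod v
    op, a, b = node
    if op == '+':
        return (expr_mod(a, v) + expr_mod(b, v)) % v     # <term> '+' <expr>
    if op == '*':
        return (expr_mod(a, v) * expr_mod(b, v)) % v     # <factor> 'x' <term>
    raise ValueError("unknown operator: %r" % op)

v = 97
tree = ('+', ('*', 123456789, 987654321), 555)           # 123456789*987654321 + 555
assert expr_mod(tree, v) == (123456789 * 987654321 + 555) % v
```

Because every intermediate value stays below v, the client's work involves only word-sized arithmetic regardless of the ciphertext sizes at the CSP.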

A. Storage Analysis
For the storage analysis, it is important to analyze the storage required to hold the residues of all the encrypted data at the client side against the actual encrypted data stored at the CSP. Storing the residues at the client side is an overhead of the proposed verification scheme that does not exist in a normal HE implementation. To gauge the storage requirement, a few assumptions are made. Among them is the modulus size: NIST recommends 2048-bit as the minimum size for the factoring modulus, while for more secure applications a factoring modulus of at least 3072-bit is recommended [79]. To simplify calculation while adopting the highest modulus value, this paper assumes a 4096-bit factoring modulus.
Another assumption is the machine word size. Current CPUs typically operate on 32- or 64-bit data (ISO/IEC 2382:2015 [80]). The storage requirement for the verification scheme depends on the size of the verifying parameter, v. In order to simulate primitive encryption, we assume the client operates on 64-bit data, which is typical of modern desktop computers. Thus, the storage requirement at the client side is about 1.5% of the CSP's full storage. To put this result into perspective, a client who owns petabytes of data stored at the CSP needs to store only terabytes of corresponding residues at the client side, which is feasible on a current desktop.
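The 1.5% figure follows directly from the two assumptions above, as this back-of-the-envelope check shows:

```python
# One 64-bit residue is kept at the client per 4096-bit ciphertext at the CSP.
residue_bits = 64
ciphertext_bits = 4096
overhead = residue_bits / ciphertext_bits
assert overhead == 0.015625               # roughly 1.5% of the CSP storage
```

Cryptosystems that emit ciphertext pairs or work modulo n^2 shift this ratio somewhat, but it stays in the same low-percent range.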

B. Performance Analysis
It is crucial to analyze the calculation overhead of the proposed verification scheme; that is, it is important to gauge the acceptable number of calculations per verification in order to reduce the calculation overhead of the proposed scheme. The analysis in this section is therefore qualitative in nature and based on how the proposed scheme works in terms of homomorphic operations. In general, the computations performed by the client's machine are only slightly faster than the computations performed by the CSP when processing a single outsourced expression (one expression, one verification). However, a series of expressions (many expressions per verification) can reduce the calculation overhead dramatically.
In the case of multiplicative-PHE, to verify one RSA multiplication the client performs one 64-bit multiplication and two 64-bit modular operations on the residues of the two corresponding ciphertexts, while the CSP homomorphically performs one big-integer (4096-bit) multiplication on the two ciphertexts. The ElGamal cryptosystem needs to double the RSA computations for one multiplication operation, because the scheme produces a ciphertext pair for each single plaintext. Table IV shows the simulation results in multiplicative-PHE over x operations. Fig. 2 and Fig. 3 show the application of the proposed scheme to the RSA and ElGamal cryptosystems, respectively; both figures demonstrate the time variance between the client's verification process and the CSP's processing of the data. Fig. 4 illustrates the corresponding overhead in processing multiplicative-PHE expressions per verification. Both cryptosystems converge in their overhead percentages: performing one verification per computation costs less than 0.01%, which quickly drops further to less than 1.00E-7% at one verification per 10000 computations.

The schematic for additive-PHE verification is similar to that of multiplicative-PHE. For verifying x additive Benaloh or OU calculations, the client performs x multiplications and two modular operations, whereas for verifying x additive Paillier calculations, the client performs x multiplications and two modular n^2 operations, as seen in Section III-A. On the other hand, the CSP homomorphically performs x big-integer (4096-bit) multiplications. Table V shows the CSP and client average execution times for the verification scheme in additive-PHE. Fig. 5, Fig. 6, and Fig. 7 display the results of applying the verification scheme to the selected additive-PHE cryptosystems, and Fig. 8 shows the overhead in processing additive-PHE expressions per verification: rates are below 0.01% and rapidly decline to approximately 1.00E-5% at only 100 applied computations.
The verification of a BGN calculation consists of verifying the additive and multiplicative homomorphisms at the same time. For x BGN computations, the verification process at the client side involves x 64-bit additions, one 64-bit multiplication, and two 64-bit modular operations on the residues of the respective ciphertexts, while the CSP homomorphically conducts x big-integer additions and one big-integer (4096-bit) multiplication on the two ciphertexts. Simulation results are shown in Table VI and Fig. 9. Fig. 10 indicates that the verification overhead performed by the client is about 2.9% of the computation time needed by the CSP to process the real BGN calculation; moreover, it shows the overhead at the client side decreasing quickly as the number of calculations per verification increases.
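The amortization effect running through these results can be sketched numerically; the microsecond figures below are illustrative assumptions, not the paper's measurements.

```python
# One fixed-cost client verification is spread over x CSP operations, so the
# relative overhead falls as 1/x.
t_verify_us = 1          # assumed cost of one residue verification (microseconds)
t_csp_op_us = 1000       # assumed cost of one 4096-bit homomorphic op (microseconds)

def overhead_pct(x):
    """Client overhead as a percentage of CSP work, for x ops per verification."""
    return 100 * t_verify_us / (x * t_csp_op_us)

assert overhead_pct(1) == 0.1            # one verification per operation
assert overhead_pct(1000) == 0.0001      # amortized over 1000 operations
```

This 1/x shape is why the measured overhead drops by several orders of magnitude once verifications are batched.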

C. Security Analysis
The proposed verification mechanism enhances the security of HE against data tampering. That is, HE cryptosystems with this mechanism are able to provide data integrity, not only confidentiality and privacy. The client can now detect whether a data breach occurred through a substitution of ciphertexts or a change in the query process, since either causes the verification result to mismatch the results sent to the client.

D. Discussion
In general, the overhead of the proposed scheme does not vary much across the homomorphic cryptosystems. This is because all the mentioned cryptosystems are based on the integer finite field, in which both the multiplicative-PHE and additive-PHE schemes are designed by manipulating only the multiplication operation on the ciphertexts. Without amortization, the per-operation overhead is noticeable, since the proposed scheme verifies by invoking calculations within the integer finite field. However, as shown in the previous section, amortization plays an important role in reducing the overhead; that is, one verification calculation is used to verify a batch of outsourced calculations. This is attributed to the growing execution-time discrepancy between the CSP and the client: the number of multiplications that the CSP applies to the encrypted data increases, while the execution time for verification changes very slightly on the client machine.
It is also important to note that the increase in ciphertext size due to the different cryptosystems, as well as the size of the finite field, does affect the overall performance of the proposed scheme. The proposed scheme is also less efficient on BGN with its SWHE feature. This is due partly to the high cost of the exponential operation used to represent a single multiplication operation.

V. CONCLUSION
This paper addresses the problem of DIV of outsourced computation. In the context of outsourcing computation to a CSP, HE over Z*_p provides data confidentiality but lacks data integrity. This paper presents an efficient DIV scheme for HE over Z*_p that evaluates the modular residue of the outsourced calculation. The proposed scheme is flexible and extensible in design, in which the number of moduli that can be used is limited only by the word size of the client's machine: with a 64-bit machine, there are technically 2^64 possible moduli. Subsequently, based on a 64-bit machine, the storage requirement on the client's machine is about 1.5% of the data size stored at the CSP. Across the different cryptosystems tested, the worst computational overhead performed by the client is less than 3% of the actual homomorphic calculation performed by the CSP, that is, if one verification is applied to one homomorphic calculation. The worst computational overhead reduces to less than 0.1% if one verification is performed for every 10 homomorphic calculations. It is also worth noting that the cryptosystems tested vary slightly in their performance. In general, the proposed verification scheme can be implemented on any homomorphic cryptosystem that operates over the integer finite field Z*_p without much restriction. Although the scheme solves the problem of verifying the integrity of data computations, it may burden the client with extra storage and work to carry out the verification phase. Therefore, in future work we aim to shift the verification process to decentralized fog nodes that communicate through a consensus.