FOREX Prices Prediction Using Deep Neural Network and FNF

Abstract—One of the largest financial markets on the planet is the foreign exchange (FOREX) market. Banks, retail traders, businesses, and individuals trade more than $5.1 trillion in FOREX daily. It is very challenging to predict prices in advance due to the market's complex, volatile, and highly fluctuating nature. In this study, the new FOREX Normalization Function (FNF) is proposed and used with different models to predict the prices of the AUD/USD, EUR/USD, USD/JPY, CHF/INR, USD/CHF, AUD/JPY, USD/CAD, and GBP/USD pairs. Two models are proposed. The first contains FNF as a normalizer and feature extractor, followed by a Convolutional Neural Network (CNN). The second utilizes FNF and a Support Vector Regressor (SVR). The forecasts use a one-day timeframe, with predictions made for 1, 3, 7, and 15 days ahead. The ability of the proposed method to address the FOREX prediction problem is demonstrated by experiments on nine real-world datasets covering different currencies. The models are evaluated using Mean Absolute Error (MAE) and Mean Squared Error (MSE). Applying the presented models to the nine datasets reduced MAE by between 0.5% and 58% on average.


I. INTRODUCTION
FOREX, also known as "foreign exchange", involves changing one currency into another. Every day, traders trade trillions of dollars [1]. Due to significant currency rate fluctuations, this market is unpredictable, complicated, and subject to frequent changes [2]. The market is always open, although trading takes place in four main time zones: European, Asian, Australian, and North American [3]. The opening and closing hours of each of these zones differ. The market is protected from manipulation, since it takes significant money to impact exchange rates. Over the past few decades, scholars have become increasingly interested in forecasting foreign exchange. Unlike the stock market, the foreign exchange market does not require large amounts of cash. Leverage is one of the most critical tools related to the market: it is the process of increasing the potential return on investment by using borrowed funds [4]. Traders, individuals, professionals making expensive purchases, entrepreneurs, and investors employ leverage. This approach is beneficial for those with limited funds and is also a vital element of the FOREX market that attracts private and small investors.
Both technical analysis and fundamental analysis can be used to forecast FOREX prices. While technical analysis only uses historical time series data to make FOREX market predictions, fundamental analysis considers various variables, including a company's and a nation's economic and industrial conditions [5]. Algorithmic trading refers to trading in which automated, programmed algorithms execute orders instead of human traders. Algorithmic trading is utilized by hedge funds, pension funds, and other financial institutions [6], [7]. A lot of work has been put in by both academics and trading companies to find possible factors that could lead to much higher profits [8]. Numerous studies have attempted to forecast the movement of the FOREX market. The most crucial decision in FOREX is predicting the direction of currency price movement: accurately forecasting currency prices can yield several advantages for traders, while poor forecasts can lead to losses. In recent years, the academic community has made a lot of effort to develop machine learning models for FOREX market prediction.
On the other hand, numerous empirical studies have been undertaken to understand and anticipate currency patterns in the FOREX market using machine learning algorithms. Generally, these methods fall into three categories: machine learning models, deep learning models, and hybrid forms.
Machine learning algorithms include Random Forest, Support Vector Machine, XGBoost, etc. Deep learning-based methods demonstrate how advanced neural models can significantly enhance prediction results. Like statistical methods, these methods require knowledge of effective signals to be used as input. Researchers have utilized deep learning methods such as RNN, LSTM, CNN, GRU, and Transformers [2]. Long Short-Term Memory (LSTM), a recurrent neural network (RNN), excels at modeling temporal patterns and is commonly employed in various time series tasks. CNNs are used to analyze price patterns by utilizing images of financial data as input [9]. This study attempts to answer the following questions: What normalization method improves FOREX prediction accuracy? What is the percentage of improvement on different datasets? Which model, when used with the FOREX normalization function, gives the best results? Therefore, the objectives of this study are to utilize the FNF method with various machine and deep learning models, to compare the proposed models with two baseline models, and then to show the percentage of error reduction achieved by FNF. The rest of this paper is organized as follows: Section II reviews the related work, Section III provides an overview of the key scientific concepts, and Section IV describes the proposed models and their architectures, the datasets used, the evaluation metrics, and the training configuration. Section V contains the results of the proposed models compared to the baselines, a discussion, and an ablation study. Finally, the paper concludes in Section VI.

II. RELATED WORK
Various methods have been used in previous years to forecast the FOREX market. Numerous approaches have been attempted, mostly based on Artificial Intelligence principles. Some methods contain just one processing technique, while others combine two or more techniques. Researchers have used a variety of linear and nonlinear models for FOREX forecasting. Naive models such as Exponential Smoothing, the Autoregressive Moving Average model (ARMA), and Autoregressive Conditional Heteroskedasticity models (ARCH) and their variants (GARCH, EGARCH, etc.) are some of the most often used strategies for modeling volatility in time series [10]. Using machine learning methods such as artificial neural networks (ANN) has been the subject of extensive study in recent years. The outstanding quality of ANNs for time series forecasting problems is their innate capacity for nonlinear modeling without assumptions about the statistical distribution of the observations. The most popular is the multi-layer perceptron (MLP) with one hidden layer. Support vector machines (SVM), initially designed to address classification problems, are now also used for time series forecasting. Least-Squares SVM (LS-SVM) and Dynamic Least-Squares SVM are two common SVM models for forecasting time series [11]. Some researchers have applied deep learning models to predict FOREX prices. Hee Kueh and Leonard proposed a comprehensive intelligent system for automated FOREX trading. The algorithm uses an ensemble methodology to make decisions; each strategy preprocesses technical data to produce a distinct buy or sell signal, and the ensemble model takes all the signals from the different strategies and applies majority-voting logic to decide what to do next [12]. The problem with this system is its low accuracy, and it was tested only on the EUR/USD dataset. Pornwattanavichai and Maneeroj [13] proposed a cascading model for the FOREX market. They made forecasts using fundamental data and technical indicators based on BERT. They used the EUR/USD dataset from February 3, 2003, through February 28, 2020, which included 4,455 days. However, this system has limitations: it has not been tested on many datasets. Junior and Appiahene [14] developed a conceptual framework centered on a FOREX forecasting module that uses the Hurst test to determine whether a time series is predictable. They then applied a two-layer stacked LSTM architecture and correlation analysis to multiple currency datasets, including EUR/AUD, AUD/JPY, and AUD/USD. However, this framework is not generalized to many currency pairs or time windows. Dash and Sahu [15] employed a Deep Predictive Coding Network Optimized with a Reptile Search Algorithm for short-term forecasting over three days to forecast exchange rates of the CHF/INR, USD/EUR, and AUD/JPY currency pairs. However, this system produces a high mean absolute error and does not support long-horizon forecasting.
Salman and Saeed proposed the FLF-LSTM model to predict EUR/USD prices. They enhanced prediction using a custom loss function named FLF with a single LSTM and different activation functions [16]. Areej and Mohamed [17] used RBF, MLP, and SVM algorithms as classifiers to predict the direction of the price and compared them based on percentage classification performance. Ikhagvadorj and Tsendsuren [18] proposed a framework consisting of seven neural networks with different activation functions. The outputs of these neural networks are concatenated and then fed into a softmax layer to produce probabilities, or importance weights, for each neural network. This model uses extended Min-Max normalization for financial time series data. Haixu and Jiehui [19] used the Autoformer for long-term FOREX price prediction at different time steps (96, 192, 336, 720). Autoformer is a variation of the Transformer that uses Auto-Correlation instead of self-attention: Auto-Correlation focuses on the connections of sub-series among underlying periods, while self-attention focuses on the connections between time points.
In conclusion, previous research shows weak results, and the impact of preprocessing has not been studied in detail. Moreover, the datasets used were not diverse and did not cover many time-horizon values. This paper addresses the impact of data preprocessing and scaling on price prediction by using FNF. Nine datasets are used in this research, with horizon values of 1, 3, 7, and 15 days. FNF is used with different models, and the proposed models outperform the baselines.

III. BACKGROUND
In this section, the necessary context for introducing our method is presented. CNN, SVM, LSTM, and XGBoost are examined in turn.

A. CNN
The Convolutional Neural Network (CNN) has demonstrated remarkable advances in several domains associated with pattern recognition and image processing throughout the previous decade. One of the primary advantages of CNNs is their ability to effectively decrease the parameter count within Artificial Neural Networks (ANNs) [20].

1) Convolutional Neural Network Element:
To develop a comprehensive understanding of CNNs, it is necessary to examine their fundamental components. The input layer receives the input and transfers it to the convolution layer [21]. The parameters of a convolutional neural network are arranged into arrays of three-dimensional structural units called kernels or filters. Assume that the filter in the q-th layer has dimensions F_q × F_q × d_q, where F_q is the spatial (height and width) extent of the filter and d_q matches the depth of layer q. The following equation defines the convolutional process from the q-th layer to the (q + 1)-th layer [22]:

h^(q+1)_(i,j,p) = Σ_{r=1..F_q} Σ_{s=1..F_q} Σ_{k=1..d_q} w^(p,q)_(r,s,k) · h^(q)_(i+r−1, j+s−1, k)    (1)

The three-dimensional tensor W^(p,q) = [w^(p,q)_(r,s,k)] represents the parameters of the p-th filter in the q-th layer, where the indices r, s, and k run along the filter's height, width, and depth. The q-th layer's feature maps are represented by the three-dimensional tensor H^(q) = [h^(q)_(i,j,k)] [22]. The pooling layer then performs downsampling along the spatial dimensions of the provided input; researchers use either max pooling or average pooling [23]. The fully-connected layer performs the same functions observed in conventional artificial neural networks. It is also recommended that the Rectified Linear Unit (ReLU) activation function be employed between these layers to enhance performance [23]. The output layer generates the final prediction value [21]. A common CNN is shown in Fig. 1 [24].
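Eq. (1) can be checked numerically with a direct triple sum. The sketch below uses arbitrary toy tensor sizes (not from the paper) and all-ones values so the result is easy to verify by hand:

```python
def conv_at(h, w, i, j):
    """One output cell of Eq. (1): sum over filter height r, width s, depth k."""
    Fq = len(w)            # spatial filter size F_q
    dq = len(w[0][0])      # filter depth d_q (matches the layer depth)
    return sum(w[r][s][k] * h[i + r][j + s][k]
               for r in range(Fq) for s in range(Fq) for k in range(dq))

# Toy layer: 3x3 spatial positions, depth 1, all ones; filter: 2x2x1, all ones.
h = [[[1.0] for _ in range(3)] for _ in range(3)]
w = [[[1.0] for _ in range(2)] for _ in range(2)]

out = [[conv_at(h, w, i, j) for j in range(2)] for i in range(2)]
print(out)  # every cell sums 2*2*1 ones -> [[4.0, 4.0], [4.0, 4.0]]
```

Note how the 3×3 input shrinks to a 2×2 output: a filter of size F_q reduces each spatial dimension by F_q − 1, which is the parameter-sharing property the section describes.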

B. SVM
Support Vector Machines (SVM) can be used for prediction when the outcome is binary, multinomial, or continuous. In statistical learning, regression with Bernoulli outcomes is commonly termed classification, and multinomial regression is also known as multiclass classification. The procedure is called regression when the outcomes are continuous [25].

1) Support Vector Regression (SVR):
In SVR, the ε-insensitive loss function is minimized: if the absolute error is smaller than ε, the loss equals zero. Eq. (2) gives this simple linear loss function:

L_ε(y, ŷ) = max(0, |y − ŷ| − ε)    (2)

In support vector machines, kernels can achieve nonlinear regressions, such as the radial basis function (RBF) and polynomial kernels [26]. The SVR can also be used with a linear kernel. The linear kernel is presented in Eq. (3), but nonlinear kernels are more flexible:

K(x_i, x_i′) = Σ_j x_ij · x_i′j    (3)

The RBF is the most common option for a nonlinear kernel. Eq. (4) presents it [25]:

K(x_i, x_i′) = exp(−γ Σ_j (x_ij − x_i′j)²)    (4)

where γ > 0 is an additional parameter for adaptability. When a test observation is far from a training observation, the exponent becomes strongly negative and K(x_i, x_i′) approaches zero. Sometimes more variables, such as polynomials, must be added as functions of the original variables; polynomial variables expand the number of regression variables. The following equation shows the polynomial kernel [25], [26]:

K(x_i, x_i′) = (γ Σ_j x_ij · x_i′j + β_0)^d    (5)

where γ > 0 and β_0 are additional parameters for adaptability; β_0 "biases" the similarity metric for all samples. Applying this kernel implies adding polynomial powers of the x variables [25].
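The three kernels can be written out directly. This is a minimal sketch with hypothetical parameter values; in practice γ, β_0, and the polynomial degree are tuned on validation data:

```python
import math

def linear_kernel(x, z):
    """Plain inner product of two feature vectors."""
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=0.5):
    """exp(-gamma * ||x - z||^2): near 1 for close points, near 0 for far ones."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly_kernel(x, z, gamma=1.0, beta0=1.0, degree=2):
    """(gamma * <x, z> + beta0)^degree, adding polynomial powers implicitly."""
    return (gamma * linear_kernel(x, z) + beta0) ** degree

x, z = [1.0, 2.0], [3.0, 4.0]
print(linear_kernel(x, z))        # 1*3 + 2*4 = 11.0
print(rbf_kernel(x, x))           # identical points -> 1.0
print(poly_kernel([1.0], [1.0]))  # (1*1 + 1)^2 = 4.0
```

The RBF value decaying toward zero for distant points is exactly the behavior described after Eq. (4).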

C. LSTM
The Long Short-Term Memory (LSTM) is an architectural design of recurrent neural networks (RNNs) that aims to provide a more precise characterization of temporal sequences and their long-range dependencies in comparison to traditional RNNs [27].In this section, LSTM architectures will be explored.

1) LSTM architectures:
The Long Short-Term Memory (LSTM) model has specialized memory blocks within its recurrent hidden layer. The memory blocks consist of memory cells with self-connections that enable them to retain the temporal state of the network. These memory cells are accompanied by specialized multiplicative units known as gates, which regulate the information flow within the network. In the original architecture, each memory block comprised an input gate and an output gate. The input gate regulates the influx of input activations into the memory cell, while the output gate regulates the transmission of cell activations from the current cell to the remaining components of the network [28], [29]. Subsequently, the forget gate was incorporated into the memory block. Furthermore, the current Long Short-Term Memory architecture incorporates peephole connections that link the internal cells to the gates inside the same cell. This design allows the LSTM to acquire accurate output timing information [30].
An LSTM network calculates a mapping from an input sequence x = (x_1, ..., x_T) to an output sequence y = (y_1, ..., y_T) by calculating the network unit activations using the following equations iteratively from t = 1 to T:

i_t = σ(W_ix x_t + W_im m_{t−1} + W_ic c_{t−1} + b_i)
f_t = σ(W_fx x_t + W_fm m_{t−1} + W_fc c_{t−1} + b_f)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g(W_cx x_t + W_cm m_{t−1} + b_c)
o_t = σ(W_ox x_t + W_om m_{t−1} + W_oc c_t + b_o)
m_t = o_t ⊙ h(c_t)
y_t = φ(W_ym m_t + b_y)

The W terms denote weight matrices, where W_ic, W_fc, and W_oc are diagonal weight matrices corresponding to the peephole connections. σ represents the logistic sigmoid function, and the b terms represent bias vectors (b_i is the input gate bias vector). The input gate, forget gate, output gate, and cell activation vectors i, f, o, and c have the same size as the cell output activation vector m. ⊙ represents the element-wise product of vectors, and g and h represent the cell input and cell output activation functions, respectively [31].
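A scalar (one-unit) version of these update equations can be sketched in plain Python; the parameter names mirror the equations above, and the values used in the demo are placeholders, not trained weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, m_prev, c_prev, p):
    """One LSTM time step (scalar case). p holds scalar weights/biases;
    W_ic and W_fc peep at c_{t-1}, while W_oc peeps at the fresh c_t."""
    i = sigmoid(p["W_ix"] * x + p["W_im"] * m_prev + p["W_ic"] * c_prev + p["b_i"])
    f = sigmoid(p["W_fx"] * x + p["W_fm"] * m_prev + p["W_fc"] * c_prev + p["b_f"])
    c = f * c_prev + i * math.tanh(p["W_cx"] * x + p["W_cm"] * m_prev + p["b_c"])
    o = sigmoid(p["W_ox"] * x + p["W_om"] * m_prev + p["W_oc"] * c + p["b_o"])
    m = o * math.tanh(c)  # cell output activation
    return m, c

# With all parameters zero, every gate sits at sigmoid(0) = 0.5 and the
# candidate update tanh(0) = 0, so the state stays at zero.
params = {k: 0.0 for k in ["W_ix", "W_im", "W_ic", "b_i", "W_fx", "W_fm",
                           "W_fc", "b_f", "W_cx", "W_cm", "b_c",
                           "W_ox", "W_om", "W_oc", "b_o"]}
print(lstm_step(1.0, 0.0, 0.0, params))  # (0.0, 0.0)
```

The gating structure (f scaling the old state, i scaling the candidate) is what lets the cell carry information across long time spans.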

D. XGBoost
XGBoost, an abbreviation for extreme gradient boosting, is well recognized as a common, robust, and efficient implementation of gradient boosting [32]. XGBoost is an ensemble model that efficiently combines decision trees to create a composite model with superior prediction performance compared to the individual techniques employed alone. The output of XGBoost is calculated using the following equation:

ŷ_i = Σ_{t=1..T} f_t(x_i)

where ŷ_i is the prediction for sample x_i, f_t is the t-th tree model, and T is the total number of tree models [33].
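The additive prediction ŷ_i = Σ_t f_t(x_i) can be sketched with stand-in "trees" (simple functions here, purely illustrative; in a real booster each f_t is a regression tree fitted to the residuals left by the previous ones):

```python
# Three stand-in "trees": each maps an input to a partial prediction.
trees = [lambda x: 0.5 * x,
         lambda x: 0.1 * (x - 1.0),
         lambda x: 0.2]

def predict(x):
    """XGBoost-style output: the sum of all T tree outputs for input x."""
    return sum(f(x) for f in trees)

print(predict(2.0))  # 0.5*2 + 0.1*1 + 0.2 ≈ 1.3
```

Appending a new tree refines the ensemble without retraining the earlier ones, which is the core idea of boosting.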

IV. RESEARCH METHODOLOGY
In this section, two models are proposed: FNF-CNN and FNF-SVR. The first model uses FNF to normalize data and extract new features; those features become the input to a CNN. The second model also uses FNF, followed by a Support Vector Regressor with four different kernels. The architecture of both models is explained in this section.
A. Proposed Approach

1) Model I: FNF-CNN:
Multiple models and preprocessing methods are used in this paper to enhance results and present the impact of FNF. For example, the Moving Average and the normalization of the close price are calculated using the FNF equation. A deep learning framework with extended Min-Max normalization [18] is used as the first baseline model. The raw data consist of the open, high, low, and close prices from the previous time steps, and the target is to predict the close price for the next day. The second baseline model is the Deep Predictive Coding Network Optimized with the Reptile Search Algorithm (RSA-DPCN) [15]. RSA-DPCN predicts the future price of currency pairs for short-term time frames, such as three, seven, and 15 days ahead of the closing price of EUR/USD, AUD/JPY, and CHF/INR [15]. Fig. 3 shows the proposed model for predicting the closing price three days ahead; it is similar to the model in Fig. 2 but predicts the price for the next three days.
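The moving-average feature-generation step of Model I can be sketched as follows. The OHLC values below are hypothetical toy data, and the final FNF scaling is deliberately left out (the paper's own normalization formula would be applied to each column); the point is only that windows 1 to 21 over four price series yield the 84 moving-average features mentioned in Section V:

```python
def moving_average(series, w):
    """Trailing moving average of window size w (one value per full window)."""
    return [sum(series[t - w + 1 : t + 1]) / w for t in range(w - 1, len(series))]

# Hypothetical daily OHLC prices (not real FOREX data).
n = 40
ohlc = {
    "open":  [1.100 + 0.001 * t for t in range(n)],
    "high":  [1.110 + 0.001 * t for t in range(n)],
    "low":   [1.090 + 0.001 * t for t in range(n)],
    "close": [1.105 + 0.001 * t for t in range(n)],
}

# One moving-average feature per (series, window) pair, windows 1..21.
features = {f"MA{w}_{name}": moving_average(series, w)
            for name, series in ohlc.items()
            for w in range(1, 22)}
print(len(features))  # 4 series x 21 windows = 84 feature columns
```

Each of these 84 columns would then be normalized by FNF before being stacked into the CNN's input window.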

2) Model II: FNF-SVR:
In the second model, the Moving Average for window sizes 1 to 21 is calculated for all features. Then, all moving average features are scaled and fed to the Support Vector Regressor (SVR), as shown in Fig. 4. Different kernels are used to obtain the best results.

B. Dataset
All models in this paper have been applied to the following datasets. The datasets in group 1 contain daily prices of GBP/USD, EUR/USD, USD/CHF, USD/JPY, AUD/USD, and USD/CAD from 2000 to 2019. These datasets are partitioned into three parts: training (80%), validation (20%), and testing (the last 365 days) [18]. The datasets in group 2 contain daily prices of AUD/JPY, CHF/INR, and EUR/USD from 2015 to 2020 [15].
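One plausible reading of this split (the exact protocol follows [18]) is that the last 365 days are held out first and the remainder is divided 80/20. The series length below is a hypothetical placeholder, not the actual dataset size:

```python
# Hypothetical number of daily observations for 2000-2019.
n_total = 5000

n_test = 365                       # the last 365 days are held out for testing
n_remaining = n_total - n_test
n_train = int(0.8 * n_remaining)   # 80% of the rest for training
n_val = n_remaining - n_train      # the remaining 20% for validation

print(n_train, n_val, n_test)
```

Keeping the test block at the chronological end avoids look-ahead leakage, which matters for financial time series.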

C. Evaluation Metrics
Mean Absolute Error (MAE) is used to assess the model's performance and is expressed by Eq. (14) [34]:

MAE = (1/T) Σ_{t=1..T} |y_t − ŷ_t|    (14)

Here, y_t denotes the actual price at period t, ŷ_t denotes the forecasted price at period t, and T denotes the sample size. In other words, the MAE is the mean of the absolute differences between the predicted and actual prices throughout the test set. A high MAE indicates a large gap between the actual and predicted prices [34].
Mean Squared Error (MSE) is another method used to evaluate model performance. MSE is the average squared difference between the actual currency price and the values predicted by the model. Eq. (15) presents MSE [16]:

MSE = (1/T) Σ_{t=1..T} (y_t − ŷ_t)²    (15)
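Both metrics are one-liners; a minimal sketch on toy values:

```python
def mae(actual, predicted):
    """Mean Absolute Error: mean of |y_t - yhat_t| over the sample."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean Squared Error: mean of (y_t - yhat_t)^2 over the sample."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

y, y_hat = [1.0, 2.0, 3.0], [1.5, 2.0, 2.0]
print(mae(y, y_hat))  # (0.5 + 0 + 1)/3 = 0.5
print(mse(y, y_hat))  # (0.25 + 0 + 1)/3 ≈ 0.4167
```

MSE penalizes large individual errors more heavily than MAE, which is why the paper reports both.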

D. Training Configuration
In the first group of datasets, when the FNF-CNN model is used, the time windows range from 1 to 21 days. This range is used because increasing it above 21 adds preprocessing cost without improving results, while reducing it lowers accuracy. The inputs to the first layer are (15×88), with nine previous time steps and 88 features. The number of previous steps was set to 9 because increasing it increases the prediction error. The number of filters is 32 and the kernel size is 4, followed by a flatten layer; these hyperparameters were chosen after many trials. Filter counts of 64 and 128 were also tested, and the experiments showed that 32 filters leads to better results. Kernel sizes of 8 and 16 were tested as well, but they did not improve the results. The last two layers are dense, and ReLU is used as the activation function. The FNF-LSTM model combines FNF for feature extraction with an LSTM to predict closing prices, and Adam is used as the optimizer. The learning rate and number of epochs for FNF-CNN and FNF-LSTM are 0.001 and 250, respectively; with a larger number of epochs, overfitting occurs, so 250 was the appropriate value. The numbers of layers are 5 and 3 in FNF-CNN and FNF-LSTM, respectively. The FNF-XGBoost model, which combines FNF with an XGBoost Regressor, is used with 80 estimators and a maximum depth of 70, and its results look promising. All of these hyperparameters were selected based on many trials.
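The stated FNF-CNN hyperparameters imply the following layer shapes. This is an arithmetic sketch of the configuration (standard "valid" 1-D convolution assumed), not the authors' code:

```python
def conv1d_out_len(in_len, kernel_size, stride=1, padding=0):
    """Output length of a 1-D convolution (the standard 'valid' formula)."""
    return (in_len + 2 * padding - kernel_size) // stride + 1

time_steps, n_features = 15, 88      # input shape (15 x 88), as stated above
n_filters, kernel = 32, 4            # 32 filters, kernel size 4

out_len = conv1d_out_len(time_steps, kernel)        # 15 - 4 + 1 = 12
flattened = out_len * n_filters                     # 12 * 32 = 384
n_params = n_filters * (kernel * n_features + 1)    # weights + one bias per filter

print(out_len, flattened, n_params)  # 12 384 11296
```

So the flatten layer hands 384 values to the two dense layers, and the convolution itself carries about 11.3k trainable parameters under these assumptions.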

V. RESULTS AND DISCUSSION

A. Results
This section shows the results compared to the results of baseline 1 and baseline 2. FNF-SVR-RBF refers to the FNF-Support Vector Regressor model with a radial basis function kernel. FNF-SVR-p2 and -p3 refer to the FNF-Support Vector Regression model with polynomial degree-2 and degree-3 kernels, respectively. Table I shows the MAE results of baseline 1 compared to the proposed models. Table II is the same as Table I but for MSE.
Using FNF generates 84 features, and this variety of features reduces error; CNN supports efficient feature learning. As a result, the proposed models perform better than baseline 1. Additionally, it is worth noting that the SVR model with FNF yielded the best results among all the models in the tables. Fig. 5 shows the actual price compared to the FNF-SVR, FNF-CNN, and FNF-LSTM predictions for EUR/USD and USD/JPY from dataset group 1. These graphs show that the model has learned the market trend and predicts the prices based on the actual trend. In these results, using FNF with SVR and CNN outperformed baseline 1 on most datasets. Figs. 6 and 7 show the percentage reduction in MAE for each model compared to baseline 1. The preprocessing and feature extraction stages are the main reasons for the error reduction.
Table III shows the results for the AUD/JPY dataset of baseline 2 compared to the proposed models. The following section shows the percentage of error reduction achieved by several models compared to baseline 2. There is a significant reduction in errors on the AUD/JPY and CHF/INR datasets. The improvement is observed for the 3, 7, and 15-day horizons, as shown in Fig. 8. However, using the proposed models on the EUR/USD dataset does not enhance the results because the strength of trend and seasonality in the EUR/USD time series is low. The results vary between datasets because each dataset has different statistical properties, and the strength of trend and seasonality differs from one time series to another.

B. Discussion and Ablation Study
This section presents an ablation study for the proposed models, FNF-SVR and FNF-CNN, and compares their results with and without FNF.Additionally, this section includes an ablation study applied to the FNF-LSTM model.

1) FNF-CNN:
This study examines the impact of using FNF with CNN and the effect of each layer in the CNN. The following tables demonstrate the impact of each layer. In the first baseline, the objective is to predict the closing price of the next day. The results for the USD/JPY dataset from group 1, which was used with the first baseline, are shown in Table VI. Table VI illustrates the impact of removing specific layers and FNF on the USD/JPY dataset from group 1. The numbers in the table header refer to different model element combinations. Conv1D refers to the one-dimensional convolution layer, and MaxPooling1D refers to the one-dimensional max pooling layer. The proposed model outperforms all other compared model variants. Tables VII and VIII present the results for the USD/CHF and AUD/USD datasets. On most datasets of the first baseline, the FNF-CNN model outperforms the other candidate CNN configurations. Comparing the second column (FNF-CNN) and the fifth column (model element combination 3), which show the results of the model with and without FNF, the scaling and feature extraction enhance results in most datasets. Using FNF with the CNN model improved the MAE results by 12.3% and 26.0% for the USD/JPY and AUD/USD datasets, respectively. These tables also confirm that the layers used in FNF-CNN are the ones that generally produce the best results on different datasets. The second baseline targets the closing price 3, 7, and 15 days ahead. Tables IX, X, and XI show the results of the CNN ablation study on the CHF/INR dataset from group 2.
The target is to predict the closing price for the next 3, 7, and 15 days. The results show that using FNF improves performance compared to not using it because FNF generates many features that enhance prediction accuracy. Additionally, these tables validate that the FNF-CNN layers are those that typically yield the best results across a variety of horizons. Using FNF with the CNN model improved the MAE results by 51.1%, 16.0%, and 17.0% on the CHF/INR dataset for horizons of 3, 7, and 15 days, respectively.

2) Impact of Kernels and FNF on the SVR model:
This section presents the impact of FNF with different SVR kernels. The USD/CHF and AUD/USD datasets from group 1 are used to predict the next day's closing price. The results with FNF outperform those without it, as shown in Fig. 9. In the second baseline, the target is to predict the closing price in the next 3, 7, and 15 days; the MultiOutputRegressor from the scikit-learn library was used to do this.
To discuss the previous results in detail, we explain the percentage of improvement for each time series separately. Applying FNF-SVR to the USD/JPY dataset from group 1 improves the MAE by 0.2%, 0.7%, 0.5%, and 2.6% when using the RBF, polynomial degree-3, polynomial degree-2, and linear kernels, respectively. Using FNF and SVR on the AUD/USD dataset from group 1 improves the MAE by 4.2%, 76.1%, and 32.2% when using the RBF, polynomial degree-3, and polynomial degree-2 kernels, respectively. Applying the FNF-SVR model to the CHF/INR dataset from group 2 to predict the next three days improves the MAE by 6.7%, 99.7%, and 29.5% when using the RBF, polynomial degree-3, and polynomial degree-2 kernels, respectively. For the same dataset with a horizon of 7 days, the results improve by 86.7%, 99.7%, and 52.0%; with a horizon of 15 days, they improve by 84.3%, 99.8%, and 34.6%, respectively.
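The multi-output strategy used for the 3/7/15-day targets (scikit-learn's MultiOutputRegressor) simply fits one independent regressor per forecast horizon. A dependency-free sketch of that idea, with a trivial mean predictor standing in for SVR:

```python
class MeanRegressor:
    """Trivial stand-in for SVR: always predicts the training-set mean."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]

class SimpleMultiOutput:
    """Fits one independent estimator per target column (per horizon)."""
    def __init__(self, make_estimator):
        self.make_estimator = make_estimator

    def fit(self, X, Y):
        n_targets = len(Y[0])  # e.g. close price 3, 7, and 15 days ahead
        self.estimators_ = [
            self.make_estimator().fit(X, [row[j] for row in Y])
            for j in range(n_targets)
        ]
        return self

    def predict(self, X):
        cols = [est.predict(X) for est in self.estimators_]
        return [list(row) for row in zip(*cols)]

X = [[0.0], [1.0]]
Y = [[1.0, 2.0, 3.0],   # per-sample targets: 3, 7, 15 days ahead
     [3.0, 4.0, 5.0]]
model = SimpleMultiOutput(MeanRegressor).fit(X, Y)
print(model.predict([[0.5]]))  # [[2.0, 3.0, 4.0]] (column means)
```

Because each horizon gets its own estimator, errors at one horizon do not propagate into the others, at the cost of ignoring correlations between horizons.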
3) FNF-LSTM:
In this section, the FNF-LSTM ablation study is presented. The USD/JPY, USD/CHF, and AUD/USD datasets from group 1 are used, and the results are displayed in Fig. 10. This figure shows that the MAE is reduced when FNF is used. Fig. 11 shows the results of the same experiment on the CHF/INR and AUD/JPY datasets from group 2. The results demonstrate that the use of FNF enhances prediction on most datasets. Because every dataset has unique statistical characteristics, and each time series has a different strength of trend and seasonality, the results differ between them. LSTM can learn temporal dependencies; using FNF-LSTM on the USD/JPY and USD/CHF datasets from group 1 improves the results by 17.8% and 9.5%, respectively, while the results for AUD/USD are not improved. Applying FNF-LSTM to the CHF/INR dataset from group 2 to predict the next 3, 7, and 15 days improves the MAE by 57.8%, 28.7%, and 24.6%, respectively. Applying FNF-LSTM to the AUD/JPY dataset from group 2 to predict the next 3, 7, and 15 days improves the MAE by 58.9%, 10.6%, and 24.4%, respectively.

VI. CONCLUSION
This paper proposed a FOREX normalization function (FNF) used as a preprocessing method. This function is used with a machine learning model (SVR) and a deep learning model (CNN) to enhance FOREX price prediction. Moving averages and scaling of the raw data are essential steps in minimizing error. Nine FOREX datasets are used with different horizons (1, 3, 7, and 15 days). Mean Absolute Error and Mean Squared Error are used to evaluate all models. The best performance comes from FNF-SVR and FNF-CNN. In this research, we compared different models with and without FNF. The comparison between the proposed models and the two baseline models shows that our proposed models outperform the baselines. The development of these proposed models is still in its early stages; since they present an exciting and potentially successful research topic, many enhancements remain to be investigated. The importance of this study is that it reduces the prediction error, so researchers can use it to build decision support systems for automated trading. Traders can also use its results to help make decisions on buying and selling currencies.
The limitation of this study is that we did not test the presented models on long-term prediction or on more datasets. In future work, the same proposed models will be used with different activation functions to find enhanced activation functions for FOREX, and the models will be trained on more datasets with varying time frames.

Here, FNF_c represents the normalized close price, and MA_w is the Moving Average of window size w. The first proposed model calculates the FNF equation for the open, high, low, and close prices. The time windows range from 1 to 21 days. Then, all moving average features are normalized (FNF) and fed to a 1-dimensional convolutional neural network (CNN). MSE is the loss function, and ReLU is the activation function, as shown in Fig. 2.

Fig. 2. The architecture of FNF-CNN to predict one day ahead.

Fig. 4. Architecture of FNF-SVR.

XGBoost and LSTM with FNF are used to compare the results of different machine and deep learning models. For all models, MSE is used as the loss function. The TensorFlow 2, Keras, Pandas, and XGBoost frameworks are used in the experiments.

Fig. 8. The MAE reduction percentage of each model compared to baseline 2, applied to the AUD/JPY and CHF/INR datasets.

Fig. 9. (a) Impact of FNF and SVR kernels applied to the USD/JPY, USD/CHF, and AUD/USD datasets from group 1 (top and middle-left of the figure). (b) The last three charts show the effect of FNF and kernels on the CHF/INR dataset from group 2 to predict the next 3, 7, and 15 days.

Fig. 10. Impact of FNF with LSTM applied to the USD/JPY, USD/CHF, and AUD/USD datasets from group 1 to predict the next-day closing price.

TABLE I
Tables IV and V are the same as Table III but for the CHF/INR and EUR/USD datasets. The 84 features extracted using FNF also enhance multistep prediction results (3, 7, and 15 days ahead). From the results, the FNF-SVR and FNF-CNN models outperform baseline 2 on the AUD/JPY and CHF/INR datasets, and on these two datasets the results of the FNF-SVR model are close to those of the FNF-CNN model. FNF-LSTM also outperforms baseline 2 on most datasets because LSTM learns temporal dependencies.

TABLE VI. FNF-CNN ABLATION STUDY APPLIED ON USD/JPY FROM GROUP 1 DATASET

TABLE VIII. FNF-CNN ABLATION STUDY APPLIED ON AUD/USD FROM GROUP 1 DATASET

TABLE IX. FNF-CNN ABLATION STUDY APPLIED ON CHF/INR FROM GROUP 2 DATASETS TO PREDICT 3 DAYS

TABLE XI. FNF-CNN ABLATION STUDY APPLIED ON CHF/INR FROM GROUP 2 DATASETS TO PREDICT 15 DAYS