How to Use PyTorch's GPU Acceleration for Stock Market Predictions

Hello everyone! Today, we're going to explore how you can leverage GPU acceleration in PyTorch to enhance the performance of your applications, focusing specifically on a stock market prediction example. PyTorch is a powerful machine learning library with seamless GPU support, enabling faster computation and easier handling of large datasets.

Why Use GPU Acceleration?

Before diving into the how-to, let's briefly cover why you might want GPU acceleration in the first place. A GPU (Graphics Processing Unit) is designed to run many operations in parallel, which makes it particularly well suited to the matrix-heavy workloads of machine learning and deep learning and often yields significantly faster training than a CPU (Central Processing Unit).

Setting Up PyTorch with GPU

First, ensure you have a GPU-enabled setup. If you're using platforms like Google Colab, you can easily switch to a GPU runtime via the settings. For local setups, you will need an NVIDIA GPU and the appropriate CUDA (Compute Unified Device Architecture) version installed.
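
As a quick sanity check on a local install, you can print the PyTorch build, the CUDA version it was compiled against, and the GPU it can see (a minimal sketch; the device name is only meaningful if a CUDA device is actually available):

    import torch

    print(torch.__version__)    # installed PyTorch version
    print(torch.version.cuda)   # CUDA version this build was compiled with (None for CPU-only builds)
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # name of the first visible GPU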

PyTorch and CUDA

PyTorch uses CUDA to manage computations on the GPU. The beauty of PyTorch is that it allows you to write your code once and then run it on either CPU or GPU without much change. Here's how you can get started:

  1. Check for GPU Availability: Before you start, check if PyTorch can detect the GPU:

    import torch
    
    if torch.cuda.is_available():
        print("GPU is available!")
        device = torch.device("cuda")
    else:
        print("GPU not available, using CPU instead.")
        device = torch.device("cpu")
    
  2. Send Your Model to GPU: If a GPU is available, you can send your model to the GPU:

    model = MyModel()  # Replace MyModel with your actual model class
    model.to(device)
    
  3. Send Data to GPU: Similarly, make sure your tensors are on the same device as the model while training. For example, if `data` is an (inputs, labels) batch from your data loader (see the fuller sketch after this list):

    inputs, labels = data[0].to(device), data[1].to(device)
    
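Putting the three steps together, a minimal end-to-end training step looks like the sketch below; a toy linear model and random tensors stand in for your real model and dataset, and `device` is the one chosen in step 1:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Toy stand-ins for a real model and dataset
    dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = torch.nn.Linear(10, 1).to(device)                 # any nn.Module moves the same way
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for inputs, labels in loader:
        # Each batch starts on the CPU; move it to the same device as the model
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()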

Real-World Example: Predicting Stock Prices

Let's apply what we've learned to a simple stock price prediction model. We'll use an LSTM (Long Short-Term Memory) network, a type of recurrent neural network suitable for sequence prediction problems.
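
If you haven't used nn.LSTM before, a quick standalone shape check (random inputs, unrelated to the stock data) shows how it consumes a sequence:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=100)  # one feature per time step
x = torch.randn(10, 1, 1)                      # (seq_len, batch, features)
out, (h_n, c_n) = lstm(x)
print(out.shape)   # torch.Size([10, 1, 100]) - one hidden vector per time step
print(h_n.shape)   # torch.Size([1, 1, 100])  - final hidden state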

Step 1: Preparing the Data

First, we need to load and preprocess our stock market data:

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler

# Load data
data = pd.read_csv('stock_prices.csv')  # expects a CSV of stock prices with at least a 'Close' column

# Preprocess data
scaler = MinMaxScaler(feature_range=(-1, 1))
price_data = scaler.fit_transform(data['Close'].values.reshape(-1, 1))

# Convert data to PyTorch tensors
price_data = torch.FloatTensor(price_data).view(-1)

# Create in/out sequence data
def create_inout_sequences(input_data, tw):
    # Slide a window of length `tw` over the series: each sample is `tw`
    # consecutive prices, and the label is the price on the following day.
    inout_seq = []
    L = len(input_data)
    for i in range(L - tw):
        train_seq = input_data[i:i+tw]
        train_label = input_data[i+tw:i+tw+1]
        inout_seq.append((train_seq, train_label))
    return inout_seq

seq_length = 10  # Number of days to look back
sequences = create_inout_sequences(price_data, seq_length)
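
Each element of `sequences` now pairs a ten-day window of scaled prices with the following day's scaled price; a quick check:

seq, label = sequences[0]
print(seq.shape, label.shape)  # torch.Size([10]) torch.Size([1])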

Step 2: Defining the LSTM Model

class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=100, output_size=1):
        super(LSTM, self).__init__()
        self.hidden_layer_size = hidden_layer_size
        self.lstm = nn.LSTM(input_size, hidden_layer_size)
        self.linear = nn.Linear(hidden_layer_size, output_size)
        # Hidden and cell states. These are plain attributes (not registered buffers),
        # so the training loop below re-creates them on the right device for each sequence.
        self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size),
                            torch.zeros(1, 1, self.hidden_layer_size))

    def forward(self, input_seq):
        # Reshape to (seq_len, batch=1, features=1), as nn.LSTM expects
        lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
        predictions = self.linear(lstm_out.view(len(input_seq), -1))
        return predictions[-1]  # only the prediction for the last time step
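
Before training, it can help to run a quick sanity check on the untrained model with a single sequence from Step 1 (a throwaway CPU-side instance here; the model that actually trains on the GPU is created in Step 3):

check = LSTM()            # throwaway instance, left on the CPU
seq, _ = sequences[0]     # first (window, label) pair from Step 1
with torch.no_grad():
    print(check(seq))     # untrained prediction for the day after the window, shape (1,)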

Step 3: Training the Model

model = LSTM()
model.to(device)

loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

epochs = 150
for i in range(epochs):
    for seq, labels in sequences:
        optimizer.zero_grad()
        # Re-initialise the hidden and cell states on the GPU for each sequence
        model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size).to(device),
                             torch.zeros(1, 1, model.hidden_layer_size).to(device))

        # Move this training pair onto the same device as the model
        seq, labels = seq.to(device), labels.to(device)

        y_pred = model(seq)

        single_loss = loss_function(y_pred, labels)
        single_loss.backward()
        optimizer.step()

    if i % 25 == 0:
        print(f'epoch: {i:3} loss: {single_loss.item():10.8f}')
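
Once training finishes, you can predict the next day's closing price from the most recent window and map it back to the original price scale with the scaler from Step 1 (a minimal sketch; variable names follow the code above):

model.eval()
with torch.no_grad():
    last_seq = price_data[-seq_length:].to(device)   # most recent `seq_length` days, scaled
    model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size).to(device),
                         torch.zeros(1, 1, model.hidden_layer_size).to(device))
    next_scaled = model(last_seq).cpu().numpy().reshape(-1, 1)

# Undo the MinMax scaling to get a price in the original units
next_price = scaler.inverse_transform(next_scaled)
print(f'Predicted next closing price: {next_price[0][0]:.2f}')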

Conclusion

By moving our computations to the GPU, we can significantly speed up training for our stock price prediction model, especially as the model and dataset grow. This approach generalizes to any machine learning task in PyTorch, making it a versatile tool for data scientists and developers.

Remember, the key steps for using GPU in PyTorch are checking GPU availability, moving your model to the GPU, and ensuring that your data is on the GPU during computations. Happy coding, and enjoy the power of GPU acceleration in your projects!