Scroll to explore exactly what each machine learning model is trying to do for you as a trader. No math, just plain-English intuition and visual examples.
Draws a straight line through your indicators to estimate the next move.
Linear Regression fits a straight line through your historical indicator data to estimate where price might go next. It gives each indicator a weight and combines them into a single forecast. It is simple, fast, and fully transparent about what it is doing.
You want a clean performance baseline and a simple reality check for more complex models. It is not meant to be a standalone live trading signal.
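The "straight line" idea can be sketched in a few lines: fit y = w·x + b by ordinary least squares on one hypothetical indicator, then read a forecast off the line. The numbers below are made up for illustration, not real market data.

```python
# Minimal sketch of the Linear Regression idea: fit a line y = w*x + b
# to toy data by least squares, then use it to "forecast" a new point.

def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution for a single feature.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    w = cov_xy / var_x
    b = mean_y - w * mean_x
    return w, b

# Hypothetical indicator readings (x) and next-day moves (y).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # perfectly linear: y = 2x
w, b = fit_line(xs, ys)
forecast = w * 5.0 + b             # prediction for a new reading x = 5
print(w, b, forecast)              # 2.0 0.0 10.0
```

With several indicators the same idea becomes one weight per indicator, but the mechanics are identical: weights times inputs, summed.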
Turns your indicators into a probability that tomorrow is up or down.
Logistic Regression predicts a probability instead of a price. It answers the question: how likely is tomorrow to be an up day versus a down day? The output is a confidence score between 0% and 100% that you can use as a trade filter.
You want an interpretable filter on top of a rule-based strategy. It can struggle when the market behaves in very non-linear ways.
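The probability-as-filter idea looks like this in miniature: combine indicators with weights, squash the score through a sigmoid, and only trade when confidence clears a threshold. The weights and the 0.6 cutoff here are illustrative, not fitted values.

```python
import math

# Sketch of the Logistic Regression idea: weighted indicators -> score,
# sigmoid -> probability of an up day, threshold -> trade filter.

def up_probability(indicators, weights, bias):
    score = bias + sum(w * x for w, x in zip(weights, indicators))
    return 1.0 / (1.0 + math.exp(-score))   # sigmoid maps score to (0, 1)

p = up_probability([0.8, -0.2], weights=[1.5, 2.0], bias=0.0)
signal = "take trade" if p > 0.6 else "skip"
print(round(p, 3), signal)   # ~0.69, take trade
```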
Pure rule-based crossovers like the classic Golden Cross and Death Cross.
Moving Average models are the simplest kind of strategy. There is no machine learning at all. When a short-term average crosses above a long-term average, they buy. When it crosses below, they sell. The rules are clear and fixed from day one.
Markets are trending strongly in one direction. These rules tend to perform poorly when price chops sideways without a clear trend.
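Because the rules are fixed, the whole strategy fits in a short function: compute a fast and a slow simple moving average, then emit a signal whenever one crosses the other. Window lengths of 2 and 3 are kept tiny just to make the toy example readable; real setups use something like 50 and 200 days.

```python
# Sketch of the crossover rule: fast SMA crossing above the slow SMA
# means "buy", crossing below means "sell". Prices are made-up closes.

def sma(prices, window):
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signals(prices, fast=2, slow=3):
    f, s = sma(prices, fast), sma(prices, slow)
    f = f[len(f) - len(s):]          # align both series to the same dates
    signals = []
    for prev_f, prev_s, cur_f, cur_s in zip(f, s, f[1:], s[1:]):
        if prev_f <= prev_s and cur_f > cur_s:
            signals.append("buy")    # fast average just crossed above
        elif prev_f >= prev_s and cur_f < cur_s:
            signals.append("sell")   # fast average just crossed below
        else:
            signals.append("hold")
    return signals

prices = [10, 9, 8, 9, 11, 13, 12, 10, 8]
print(crossover_signals(prices))
```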
Forecasts the next price move using only the history of the price itself.
ARIMA is a classical forecasting method that looks only at the price series. It asks whether yesterday’s move tells you anything about tomorrow’s move, without using extra indicators. It focuses on patterns in how price tends to trend or snap back.
You want to benchmark whether your indicators add real value beyond raw price history alone.
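The simplest ingredient of ARIMA, an AR(1), shows the "does yesterday's move predict today's?" question directly: fit a single coefficient phi on lagged moves and forecast with it. The toy series below is constructed, not real returns.

```python
# Sketch of the AR(1) core of ARIMA: fit moves[t] = phi * moves[t-1]
# by least squares (no intercept), then forecast the next move.

def fit_ar1(moves):
    x = moves[:-1]                    # yesterday's moves
    y = moves[1:]                     # today's moves
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

moves = [1.0, -0.5, 0.25, -0.125, 0.0625]   # each move is -0.5x the last
phi = fit_ar1(moves)
next_move = phi * moves[-1]
print(phi, next_move)   # -0.5 and -0.03125
```

A phi near zero on real data is exactly the benchmark signal described above: raw price history alone has little to say, so any edge must come from the indicators.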
A flowchart of yes/no questions that ends in Buy, Sell, or Hold.
A Decision Tree is a flowchart of simple questions about today’s indicators. It might ask if RSI is above 70, or if volatility is high. Each answer leads to another question until the path ends in a Buy, Sell, or Hold decision.
You want to diagnose which individual indicators matter most for a ticker. By itself it is usually too simple to act as a reliable standalone signal.
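The flowchart translates directly into if/else questions. The thresholds below (RSI 70/30, volatility 0.03) are illustrative hand-picked values; a trained tree learns its own split points from history.

```python
# A Decision Tree's flowchart written out as plain if/else questions.

def tree_decision(rsi, volatility):
    if rsi > 70:                    # overbought branch
        return "Sell" if volatility > 0.03 else "Hold"
    if rsi < 30:                    # oversold branch
        return "Buy"
    return "Hold"                   # middle of the range: do nothing

print(tree_decision(rsi=75, volatility=0.05))   # Sell
print(tree_decision(rsi=25, volatility=0.01))   # Buy
```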
Combines many small trees so no single rule can dominate the signal.
Random Forest runs many Decision Trees at once, each trained on slightly different slices of history. Each tree votes on Buy, Sell, or Hold, and the model combines those votes. One odd tree cannot derail the overall result.
You want a robust, stable signal that does not rely on one narrow pattern or one lucky period.
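The voting step itself is simple majority rule, which is why one odd tree cannot derail the result. The votes below are hard-coded stand-ins; a real forest grows each tree from a different bootstrapped slice of history.

```python
from collections import Counter

# Toy illustration of the Random Forest voting step: each "tree" casts
# a vote and the forest takes the majority.

def forest_vote(tree_votes):
    return Counter(tree_votes).most_common(1)[0][0]

votes = ["Buy", "Buy", "Hold", "Buy", "Sell"]   # one odd tree says Sell
print(forest_vote(votes))   # Buy -- the outlier is outvoted
```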
Draws the widest possible boundary between bullish and bearish conditions.
A Support Vector Machine looks at all indicators at once and tries to draw the widest possible gap between up days and down days. It pays special attention to the hardest days near the boundary, where the decision is not obvious.
You have a rich, well-prepared feature set and want a strong classical model, understanding that training can slow down on very long histories.
Builds trees that keep fixing their own mistakes until the signal is sharp.
XGBoost builds trees one after another, where each new tree focuses on correcting the mistakes of the previous ones. After many rounds of self-correction, the ensemble becomes a strong, focused signal. It is fast and well suited to indicator-style data.
You are working with tabular indicator data and want a strong, battle-tested model for financial prediction tasks.
Splits price into trend, seasonality, and special events to spot recurring patterns.
Prophet decomposes price history into a trend line, seasonal patterns, and one-off events. It is designed to catch regular rhythms in the data, like a stock that reliably strengthens in certain months or around known calendar events.
There is genuine seasonality in a market or index. For many individual stocks this is rare, so Prophet is best treated as a specialized tool.
Keeps only the indicators that truly help by shrinking weak ones toward zero.
Ridge and Lasso are Linear Regression models with built-in self-discipline. If an indicator is not genuinely useful, its weight gets pulled toward zero, and in Lasso's case removed entirely. This keeps the model from latching on to noisy signals.
You have many indicators and suspect that only a handful are truly pulling their weight.
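The shrink-toward-zero mechanics are captured by the soft-threshold operator at the heart of Lasso: every weight is pulled toward zero by the penalty, and weights smaller than the penalty snap to exactly zero. The raw weights and the 0.1 penalty below are made up for illustration.

```python
# Sketch of Lasso-style shrinkage: the soft-threshold operator pulls
# weights toward zero and zeroes out the weak ones entirely.

def soft_threshold(weight, penalty):
    if weight > penalty:
        return weight - penalty
    if weight < -penalty:
        return weight + penalty
    return 0.0                     # weak indicator removed entirely

raw_weights = [0.9, -0.4, 0.05, -0.02]    # hypothetical fitted weights
shrunk = [round(soft_threshold(w, penalty=0.1), 10) for w in raw_weights]
print(shrunk)   # [0.8, -0.3, 0.0, 0.0] -- two weak indicators dropped
```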
Searches history for days that looked like today and asks what happened next.
K-Nearest Neighbors looks back through history to find the days that most closely resemble today in terms of RSI, volatility, momentum, and other features. It then checks what typically happened right after those similar days.
Your indicators capture market state well and you are comfortable with slower responses on very long histories.
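The lookup reads almost like the description: measure the distance from today's readings to every past day, keep the k closest, and report the majority outcome. The tiny history table of (RSI, volatility) pairs below is invented for illustration.

```python
# Sketch of K-Nearest Neighbors: find the k most similar past days
# and report what usually happened next.

def knn_outcome(history, today, k=3):
    """history: list of (indicator_vector, next_day_outcome) pairs."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda row: distance(row[0], today))[:k]
    ups = sum(1 for _, outcome in nearest if outcome == "up")
    return "up" if ups > k / 2 else "down"

history = [
    ([70, 0.02], "down"), ([68, 0.02], "down"), ([65, 0.03], "down"),
    ([30, 0.01], "up"),   ([32, 0.02], "up"),   ([28, 0.01], "up"),
]
print(knn_outcome(history, today=[31, 0.015]))   # up
```

Note that every prediction scans the whole history, which is exactly why the blurb above warns about slower responses on very long histories.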
Adds up simple probabilities from each indicator to get a quick up-or-down view.
Naive Bayes treats each indicator as giving its own simple opinion about the chance of an up day. It multiplies those opinions together to get a final probability. It is extremely fast and surprisingly effective when the signals are clean.
You want quick probability estimates on smaller datasets and do not need a heavy, complex model.
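The "multiply the opinions" step looks like this: each indicator contributes how likely its current reading is on an up day versus a down day, the model multiplies those with the base rate, and normalizes. All probabilities below are made up.

```python
# Sketch of Naive Bayes: multiply per-indicator likelihoods with the
# prior, then normalize into a final up-day probability.

def naive_bayes_up(prior_up, likelihoods):
    """likelihoods: list of (p_reading_given_up, p_reading_given_down)."""
    up = prior_up
    down = 1.0 - prior_up
    for p_up, p_down in likelihoods:
        up *= p_up
        down *= p_down
    return up / (up + down)        # normalize to a probability

# Two indicators, both leaning bullish today.
p = naive_bayes_up(prior_up=0.5, likelihoods=[(0.7, 0.3), (0.6, 0.4)])
print(round(p, 3))   # 0.778
```

The "naive" part is that each indicator's opinion is multiplied in independently; when indicators overlap heavily, the model double-counts their evidence.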
A basic neural network that mixes your indicators through several layers.
An MLP is a straightforward neural network with layers of connected nodes. It takes today’s indicators, transforms them through several stages, and produces a Buy or Sell style output. It can capture patterns that simple straight-line models miss.
You have a rich feature set and access to GPU training, and you want to explore deep learning without sequence memory.
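One forward pass through a tiny MLP shows the "transform through several stages" idea: indicators feed a hidden layer with a non-linearity, and the hidden layer feeds an output probability. Every weight below is hand-picked for illustration; a real network learns them by gradient descent.

```python
import math

# Forward pass of a tiny MLP: inputs -> ReLU hidden layer -> sigmoid output.

def relu(x):
    return max(0.0, x)

def mlp_forward(inputs, hidden_weights, hidden_biases, out_weights, out_bias):
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_weights, hidden_biases)]
    score = sum(w * h for w, h in zip(out_weights, hidden)) + out_bias
    return 1.0 / (1.0 + math.exp(-score))   # probability-style output

p = mlp_forward(
    inputs=[0.5, -1.0],                      # two indicator readings
    hidden_weights=[[1.0, -0.5], [0.5, 1.0]],
    hidden_biases=[0.0, 0.0],
    out_weights=[2.0, -2.0],
    out_bias=0.0,
)
print(round(p, 3))
```

The ReLU between the layers is what lets the network bend: without it, the whole stack collapses back into one straight-line model.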
A modern network that blends past prices, known future events, and stock facts.
The Temporal Fusion Transformer can look at many types of inputs at once: historical prices, known future events like earnings dates, and static facts about the stock. It uses attention to focus on the most important parts of the past when making its forecast.
You have large amounts of data and complex forecasting needs. It is a forward-looking model for later phases of the roadmap.
Reads price history one day at a time but forgets quickly beyond the recent past.
An RNN reads price history like a sentence, one day at a time, carrying a short memory of what it has seen. In practice that memory fades quickly, so it mainly focuses on the most recent couple of weeks.
You want to understand why stronger sequence models like the LSTM were introduced. It is largely a stepping stone rather than the final choice.
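The fading memory is easy to see numerically: a plain RNN carries a hidden state h = tanh(w_h·h + w_x·x), and with |w_h| < 1 the echo of an old input shrinks every step. Feed it a single spike followed by quiet days and watch the echo decay. The weights are illustrative.

```python
import math

# Sketch of a plain RNN's fading memory: one hidden number updated
# day by day, with the echo of old inputs shrinking each step.

def run_rnn(inputs, w_h=0.5, w_x=1.0):
    h = 0.0
    trace = []
    for x in inputs:
        h = math.tanh(w_h * h + w_x * x)
        trace.append(h)
    return trace

# A single spike on day 1, then quiet days: the echo decays fast.
trace = run_rnn([1.0, 0.0, 0.0, 0.0, 0.0])
print([round(h, 3) for h in trace])
```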
A sequence model that can remember important information across many weeks.
The LSTM is a smarter version of the RNN with gates that decide what to remember, what to forget, and what to output. It can hold onto relevant information across dozens of trading days without letting everything blur together.
Sequential patterns really matter and you have enough history to train on. This is the main deep learning workhorse in the platform.
A leaner sequence model that aims for LSTM-level insight with less overhead.
The GRU is a lighter variant of the LSTM that keeps the core idea of gated memory but with fewer moving parts. It aims to capture similar sequential patterns while training faster and using less memory.
You want LSTM-style behavior but need faster training for on-demand or large-batch runs.
Scans price history for short repeating patterns like a pattern detector.
A CNN for time series slides small filters across recent price history to spot local patterns. It works much like an image model that detects edges and shapes, but here it searches for short candlestick and volatility patterns.
You want to combine it with a sequence model like an LSTM in a hybrid architecture.
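The sliding-filter mechanic is just a 1-D convolution: a small kernel slides across recent prices and fires where its pattern appears. The kernel below is a hand-picked "three-day rise" detector; a real CNN learns its kernels from data.

```python
# Sketch of a 1-D convolution: slide a small kernel over prices and
# record how strongly each window matches the kernel's pattern.

def conv1d(prices, kernel):
    k = len(kernel)
    return [sum(w * p for w, p in zip(kernel, prices[i:i + k]))
            for i in range(len(prices) - k + 1)]

prices = [10, 10, 11, 13, 12, 12]
kernel = [-1, 0, 1]            # responds to a rise across three days
print(conv1d(prices, kernel))  # [1, 3, 1, -1] -- peaks on the sharp rise
```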
Lets a CNN spot local patterns while an LSTM tracks how they unfold over time.
In the CNN + LSTM hybrid, the CNN first finds local shapes in recent price action. The LSTM then reads how those shapes appear and fade over longer windows. Together they capture both short bursts and longer arcs in the data.
Both short-term behavior and multi-week context matter at the same time. It is a forward-looking model for later roadmap phases.
Looks across the entire history at once and jumps straight to the most relevant days.
Rather than reading history one day at a time, the Transformer looks at all days together and learns which past moments matter most for today’s decision. It is like opening a book and flipping directly to the few pages that matter.
You have large datasets and care about long forecast horizons. It is a cutting-edge, future-focused model.
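The "flip directly to the relevant pages" step is attention: score every past day against today's query, softmax the scores into weights, and average the past days' values under those weights. Here each day is summarized by a single made-up number to keep the sketch readable; real Transformers use learned vectors.

```python
import math

# Sketch of attention: similarity scores -> softmax weights -> weighted
# average over the past days' values.

def attention(query, keys, values):
    scores = [query * k for k in keys]          # similarity to each day
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]         # softmax: sums to 1
    context = sum(w * v for w, v in zip(weights, values))
    return weights, context

weights, context = attention(query=2.0,
                             keys=[0.1, 0.1, 1.5],    # day 3 looks relevant
                             values=[-1.0, -1.0, 1.0])
print([round(w, 3) for w in weights])   # nearly all weight on day 3
```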
Learns to trade by trying actions in a simulator and seeing which ones pay off.
A DQN agent learns by trial and error in a simulated market. It tries Buy, Hold, and Sell thousands of times and tracks which choices tend to lead to better long-term outcomes. Over time it builds up a playbook that maps market states to actions.
You want a model that learns a full trading policy instead of just predicting next-day direction.
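The learning rule behind DQN is easiest to see in its tabular ancestor, Q-learning: after each simulated step, nudge the value of the (state, action) pair toward the reward plus the discounted value of the best next action. The two market states and the reward below are invented; DQN replaces the table with a neural network.

```python
# Sketch of the Q-learning update that DQN approximates with a network.

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(q[next_state].values())
    target = reward + gamma * best_next          # what this step was worth
    q[state][action] += alpha * (target - q[state][action])

# Two toy market states, three actions, all values starting at zero.
q = {s: {"Buy": 0.0, "Hold": 0.0, "Sell": 0.0} for s in ("calm", "volatile")}
# One piece of simulated experience: buying in a calm market paid off.
q_update(q, "calm", "Buy", reward=1.0, next_state="volatile")
print(q["calm"]["Buy"])   # 0.5 -- the playbook now favors Buy when calm
```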
A stable RL agent that refines a complete trading strategy over many play-throughs.
PPO learns a trading strategy the way a player learns a video game: by playing it repeatedly and adjusting what works. It is designed to make steady, safe updates so that new training steps rarely wreck a working policy.
You want the strongest and most stable reinforcement learning baseline for a trading policy.
Trains a decision-maker and a critic together so the trading plan improves quickly.
A2C trains two parts side by side: an actor that chooses actions and a critic that judges how good the situation is. The critic’s feedback helps the actor learn faster, at the cost of a bit less stability than PPO.
You need rapid iteration across many tickers and can accept slightly more volatility in training outcomes.
Predicts both the likely move and how confident it is, shown as a changing band.
A Gaussian Process predicts a range for the next move rather than a single point. When it is unsure, that range widens. When it has seen many similar conditions before, the range narrows and confidence rises.
Understanding uncertainty and sizing positions based on confidence is more important than nailing the exact next move.
Sits on top of a rule-based strategy and filters out its weakest trade ideas.
Meta-Labeling does not predict the market directly. Instead, it watches a rule-based strategy generate signals and learns which of those signals were worth taking. High-confidence signals are allowed through; low-confidence ones are filtered out.
A rule-based strategy is generating too many trades and you want a smart filter that keeps the best ideas and quietly drops the rest.
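The filtering step itself is a one-liner once the secondary model has attached a confidence to each signal: keep the ideas above the threshold, drop the rest. The signals and confidences below are made up; in practice the confidences come from a model trained on the strategy's past signals and their outcomes.

```python
# Sketch of the Meta-Labeling filter: the primary strategy proposes
# trades, a secondary model scores them, and only confident ones pass.

def meta_filter(signals, threshold=0.6):
    """signals: list of (trade_idea, model_confidence) pairs."""
    return [idea for idea, conf in signals if conf >= threshold]

signals = [("buy AAPL", 0.82), ("buy XYZ", 0.41), ("sell QQQ", 0.67)]
print(meta_filter(signals))   # ['buy AAPL', 'sell QQQ']
```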