GPT-3 - Generative Pre-trained Transformer 3
Generative Pre-trained Transformer 3, in relation to Search Engine Optimisation, is a technique that is
supposed to predict what text (or topics/content) will be of interest to Internet users, based on what was
searched for in the past.
The technique is designed to generate content that will garner more "hits".
I have already voiced my concerns regarding the automatic generation of content for information seen on the
Internet. GPT will certainly add to the confusion over what is actual fact and what those who want to sell you
something would like you to believe.
Along with Natural Language Processing and Generation (NLP and NLG), GPT is yet another attempt to "fool"
Internet users and the Search Engines into thinking there is useful information on a webpage/website.
What the Wikipedia page said about GPT-3 at the time of writing (March 17, 2021):
Please visit the Wikipedia page for current thoughts. The text below is included here just for the sake of
discussion.
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce
human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2)
created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a
capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020, and was in beta testing
as of July 2020, is part of a trend in natural language processing (NLP) systems of pre-trained language representations.
Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020,
with a capacity of 17 billion parameters, or less than a tenth of GPT-3's.
There is now a GPT-4 and a corresponding Wikipedia page.
NLP and NLG
The impact of Natural Language Processing (NLP) and Natural Language Generation (NLG) on SEO remains to be seen.
In Statistics
About the autoregressive (AR) model:
In statistics, econometrics and signal processing, an autoregressive (AR) model is a representation of a type of
random process; as such, it is used to describe certain time-varying processes in nature, economics, etc.
The autoregressive model specifies that the output variable depends linearly on its own previous values and on
a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference
equation (or recurrence relation), which should not be confused with a differential equation. Together with the
moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average
(ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated
stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a
system of more than one interlocking stochastic difference equation in more than one evolving random variable.
Contrary to the moving-average (MA) model,
the autoregressive model is not always stationary as it may contain a unit root.
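In plainer terms: an AR model says the next value in a series is a weighted sum of the previous values plus
random noise. The sketch below simulates an AR(2) process of exactly that form; the coefficients and function
name are my own illustration, not from the Wikipedia article.

    import random

    def simulate_ar2(c=0.1, phi1=0.5, phi2=-0.3, sigma=1.0, n=200, seed=42):
        """Simulate X_t = c + phi1*X_{t-1} + phi2*X_{t-2} + eps_t."""
        rng = random.Random(seed)
        x = [0.0, 0.0]                      # starting values for the two lags
        for _ in range(n):
            eps = rng.gauss(0.0, sigma)     # the stochastic (noise) term
            x.append(c + phi1 * x[-1] + phi2 * x[-2] + eps)
        return x[2:]                        # drop the artificial start values

    series = simulate_ar2()
    print(series[:5])

With these particular coefficients the process is stationary; change them enough (a "unit root") and, as the
quote notes, it no longer is.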
According to Investopedia:
..... What is an Autoregressive Model?
A statistical model is autoregressive if it predicts future values based on past values. For example,
an autoregressive model might seek to predict a stock's future prices based on its past performance.
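To make that concrete, here is a minimal sketch of fitting an AR(1) model, X_t = c + phi*X_{t-1} + noise, to a
short series by ordinary least squares and predicting the next value. The "price" data are invented for
illustration, not taken from Investopedia.

    def fit_ar1(series):
        """Return (c, phi) minimising the squared one-step prediction error."""
        x_prev, x_next = series[:-1], series[1:]
        n = len(x_prev)
        mean_prev = sum(x_prev) / n
        mean_next = sum(x_next) / n
        cov = sum((a - mean_prev) * (b - mean_next)
                  for a, b in zip(x_prev, x_next))
        var = sum((a - mean_prev) ** 2 for a in x_prev)
        phi = cov / var                     # weight on the previous value
        c = mean_next - phi * mean_prev     # constant term
        return c, phi

    prices = [100.0, 101.5, 101.0, 102.3, 103.1, 102.8, 104.0]  # made-up data
    c, phi = fit_ar1(prices)
    print("one-step forecast:", c + phi * prices[-1])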
but..... what does this really have to do with AI?
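Quite a lot, as it turns out: GPT-3 is called "autoregressive" because it applies the same idea to text.
Instead of predicting the next number from previous numbers, it predicts the next token (roughly, a word
fragment) from all the tokens before it, then feeds its own output back in as context. The sketch below shows
that generation loop; the tiny lookup-table "model" is a made-up stand-in for a real neural network, not
OpenAI's API.

    import random

    def next_token_probabilities(context):
        """Hypothetical stand-in for a language model: {token: probability}."""
        table = {
            ("the",): {"cat": 0.6, "dog": 0.4},
            ("cat",): {"sat": 0.7, "ran": 0.3},
            ("dog",): {"sat": 0.2, "ran": 0.8},
            ("sat",): {".": 1.0},
            ("ran",): {".": 1.0},
        }
        return table.get(tuple(context[-1:]), {".": 1.0})

    def generate(prompt, max_tokens=5, seed=7):
        rng = random.Random(seed)
        tokens = list(prompt)
        for _ in range(max_tokens):
            probs = next_token_probabilities(tokens)
            choices, weights = zip(*probs.items())
            tokens.append(rng.choices(choices, weights=weights)[0])  # sample
            if tokens[-1] == ".":           # stop at end of "sentence"
                break
        return " ".join(tokens)

    print(generate(["the"]))  # prints a short generated "sentence"

Whether text produced this way is useful information or just plausible-sounding filler is, of course, the whole
point of the concern voiced above.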