Bell, D., and L. Gana. "Algorithmic Trading Systems: A Multifaceted View of Adoption." System Science (HICSS), 2012 45th Hawaii International Conference on. Web.
“Algorithmic Trading Systems: A Multifaceted View of Adoption” provides a quick overview of recent evolutions in how the market works. The market has progressed from a group of people who meet and agree on how much of their companies to trade, and for how much, to an incredibly quick, complex, and global system that allows anyone to (nearly) instantly trade shares from (nearly) anywhere. It is now possible not only for people to do this but also for computers. This article presents the experience that IT professionals have had in the adoption of such systems. Topics covered include the expense of maintaining a data center that is guaranteed to be up all of the time, scaling to exponentially more trades than the system was originally built for, and compliance with federal regulations.
This article was only vaguely on the topic I was hoping for: the implementation in software of such systems, not the adoption of software into IT infrastructure. It did, however, have a nice little description of some of the evolution of algorithmic trading over the past few years at the beginning.
Calafiore, G. C., and B. Monastero. "Experiments on Stock Trading Via Feedback Control." Information and Financial Engineering (ICIFE), 2010 2nd IEEE International Conference on. Web.
THIS PAPER NEEDS REVIEW; there is useful information here.
"Experiments on Stock Trading Via Feedback
Control". Explores and describes the Barmish-Iwarere (BI) trading algorithm.
The paper begins by describing some background information used in BI. First
Brownian motion is quickly review as being: a Markov process, having
independent increments, and normally distributed over time. Second the Ito
process is described as being a composite
of the Wiener process and Brownian motion. The trading system being explored is
then described as being composed of a trigger and a controller. The trigger
tells the controller when to take and the controller decides how aggressive to
take the given action. An Ito process was used to test the system. The trigger
takes action if any of the following conditions are true: “confidence” in the
stock is at the lower tolerance level, or the stock is significantly high then
the drift or volatility would normally allow for (a market imbalance seems to
have been detected). It is indicated that this process is very well optimized
but the problem of how to optimize the amount of a risky investment is an open
problem. The possibility of using an optimal Kelly fraction (or the Latane
strategy) is then explored. The results of the research are then explored with
the conclusion that BI is moderately effect and fairly predictable.
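To make the trigger/controller split concrete for myself, here is a quick Python sketch: it simulates an Ito process (geometric Brownian motion) and fires a trigger when the price strays further than drift and volatility would normally allow, with a proportional controller deciding how aggressively to act. The thresholds, the controller, and all the parameters are my own illustrative assumptions, not the actual BI algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ito_prices(s0=100.0, mu=0.05, sigma=0.2, dt=1/252, n=252):
    """Simulate a geometric Brownian motion path (a simple Ito process):
    dS = mu*S*dt + sigma*S*dW."""
    dw = rng.normal(0.0, np.sqrt(dt), n)            # Wiener process increments
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * dw
    return s0 * np.exp(np.cumsum(log_returns))

def trigger(price, expected, band):
    """Fire when the price strays beyond what drift/volatility normally allow.
    (Illustrative stand-in for BI's trigger conditions.)"""
    return abs(price - expected) > band

def controller(price, expected, scale=10.0):
    """Decide how aggressively to act: position size proportional to the
    deviation, leaning against the apparent imbalance. (Illustrative only.)"""
    return -scale * (price - expected) / expected

s0, mu, sigma, dt = 100.0, 0.05, 0.2, 1/252
prices = simulate_ito_prices(s0, mu, sigma, dt)
for t, p in enumerate(prices, start=1):
    expected = s0 * np.exp(mu * t * dt)             # drift-implied expectation
    band = 2 * sigma * np.sqrt(t * dt) * expected   # ~2-sigma tolerance band
    if trigger(p, expected, band):
        print(f"day {t}: trade with position {controller(p, expected):+.3f}")
```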
This article was interesting. It helped me find some more terms (listed below) that may help me understand the concepts necessary for effective development, evaluation, and discussion of automatic trading systems. The concepts of the Ito process and approximations of the Black-Scholes model seem to be particularly important.
Further research is needed to determine what the following terms mean: Wiener process, drift (in the context of stock trading), Brownian motion, optimal Kelly fraction, Latane strategy, Black-Scholes model.
Hayward, S. "Setting Up Performance Surface of an Artificial Neural Network with Genetic Algorithm Optimization: In Search of an Accurate and Profitable Prediction of Stock Trading." Evolutionary Computation, 2004 (CEC2004), Congress on. Web.
THIS PAPER NEEDS REVIEW; there is useful information here, particularly about what predictors to use.
"Setting up Performance Surface of an Artificial
Neural Network with Genetic Algorithm Optimization: In Search of an Accurate
and Profitable Prediction of Stock Trading" talk about various prediction
methods used in evolutionary/ artificial neural network (E/ANN). First the
problem is modeled, that being the composition of various (E/ANN) methods and
prediction methods to make a decision on whether a trigger should be raised and
how much to invest if so. The model and variables that this paper is using to
describe the market is then reviewed. Next, the method used to determine the
optimal predictor in the context of this paper is reviewed (in this case to use
another machine learning algorithm, the merits of which are quickly debated
against other machine learning methods). The parameters determining the scope
of the review of results were defined. The article concluded as not determining
any “best” predictor.
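To keep the E/ANN idea straight in my head, here is a minimal sketch of the general pattern: a genetic algorithm evolving the weights of a tiny neural network that predicts the next return. This is not Hayward's setup; the network shape, fitness function, and toy data are all my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(weights, x):
    """Tiny one-hidden-layer network; weights is a flat vector of 12 values."""
    w1 = weights[:8].reshape(2, 4)   # 2 inputs -> 4 hidden units
    w2 = weights[8:12]               # 4 hidden -> 1 output
    return np.tanh(x @ w1) @ w2

def fitness(weights, x, y):
    """Negative mean squared prediction error (higher is better)."""
    return -np.mean((predict(weights, x) - y) ** 2)

# Toy 'market' data: predict the next return from the last two returns.
returns = np.diff(np.log(np.cumsum(rng.normal(0.001, 0.01, 300)) + 10))
x = np.column_stack([returns[:-2], returns[1:-1]])
y = returns[2:]

pop = rng.normal(0, 1, (50, 12))                    # 50 candidate weight vectors
for gen in range(100):
    scores = np.array([fitness(w, x, y) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]           # keep the 10 best
    parents = elite[rng.integers(0, 10, (50, 2))]   # random parent pairs
    pop = parents.mean(axis=1)                      # crossover: average parents
    pop += rng.normal(0, 0.1, pop.shape)            # mutation
print("best fitness:", max(fitness(w, x, y) for w in pop))
```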
First, this article is set in a terrible font. The article has some interesting points about how to do analysis on the various components of an E/ANN algorithm. While this is not the point of the article, it does seem to be something worth duplicating or using as a reference when comparing methods that different researchers used. The articles it cites also look interesting; I will have to look them up.
Further research is needed to determine what the following terms mean: surface optimization (in the context of genetic algorithms), autocovariance (which, despite what Word believes, is a word), Posterior Optimal Rule Signal (PORS), and backpropagation (another real word) in the context of online machine learning.
Iokibe, T., S. Murata, and M. Koyama. "Prediction of Foreign Exchange Rate by Local Fuzzy Reconstruction Method." Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century, IEEE International Conference on. Web.
THIS PAPER NEEDS REVIEW; there is useful information here, particularly about the application of chaos theory to fiscal situations.
"Prediction of Foreign Exchange Rate by Local
Fuzzy Reconstruction Method" primarily reviews three topics: predicting
timeseries data and deterministic chaos, Takens’ embedding theorem, local fuzzy
reconstruction. Deterministic chaos is defined as being a system that is
seemingly chaotic yet is generated by a deterministic source. Takens’ embedding
theorem is a method of determining the location of a attractor in a chaotic
system. A visual example of how this can apply to a two dimensional data source
is also presented. Finally the concept of local fuzzy reconstruction is
introduced (LFRM). LFRM is a much less expensive way and simpler to calculate
with less variables the next probable state in a deterministically chaotic set
of behaviors. The article concludes after reviewing a experiment that the
system is sufficiently accurate to be used in short term predictions.
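Here is a small sketch of delay embedding plus a nearest-neighbor local prediction, which as far as I can tell is the basic idea that LFRM refines with fuzzy membership weights. The embedding dimension, delay, and logistic-map data are my own illustrative choices, not Iokibe's parameters.

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Takens-style delay embedding: turn a scalar series into state vectors
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau]) that trace out the attractor."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i : i + (dim - 1) * tau + 1 : tau] for i in range(n)])

def local_predict(series, dim=3, tau=1, k=5):
    """Predict the next value from the k nearest past states. LFRM refines
    this idea with fuzzy membership weights; plain averaging is used here."""
    vecs = delay_embed(np.asarray(series), dim, tau)
    query, past = vecs[-1], vecs[:-1]
    targets = np.asarray(series[(dim - 1) * tau + 1 :])  # value after each past state
    dists = np.linalg.norm(past - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.mean(targets[nearest])

# Toy chaotic data: the logistic map, deterministic yet seemingly random.
x = [0.4]
for _ in range(499):
    x.append(3.9 * x[-1] * (1 - x[-1]))
print("predicted:", local_predict(x), "actual:", 3.9 * x[-1] * (1 - x[-1]))
```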
As with most things involving chaos (in the mathematical sense), attractors are discussed, and it seems to me that using strange attractors in the context of predicting the stock market is a remarkably good idea. The mention of Takens’ theorem is also very intriguing and will lead to further research. The idea of remodeling the stock exchange as a multi-dimensional data source also seems like a good idea to me. It makes me wonder if this could be extended to work with longer term predictions or used in concert with other methods to effectively make predictions.
Further research is needed on the following: the term "dynamical", deterministic chaos in a general setting, and a better understanding of fuzzy logic.
Side note: I don’t care what the world says; "dynamical" IS NOT a word.
Kendall, G., and Y. Su. "Learning with Imperfections - a Multi-Agent Neural-Genetic Trading System with Differing Levels of Social Learning." Cybernetics and Intelligent Systems, 2004 IEEE Conference on. Web.
THIS PAPER NEEDS REVIEW; there is useful information here, particularly about multi-agent social learning.
"Learning with Imperfections - a Multi-Agent
Neural-Genetic Trading System with Differing Levels of Social Learning"
presents the paradigm of the market being such a complex system that any
perceptions that we or computers can make of it are imperfect. This makes the market an imperfect system
(from any useful point of view). The paper also explores how a multi-agent
system that communicates with it’s self behaves in this context. First the
research is introduced reviewing the components of the research: finding a
evolutionary algorithm that not only can find a optimal solution but adapt to
the non-static fitness space that is present in the market, and the fact that
no matter how much data you provide an agent with it is not possible for them
to create a completely accurate predictive model that can be used (imperfect
environment) and that this will cause each agent to perceive the environment a
unique (possibly useful) way. Next optimization problems (and ideas to overcome
them) in dynamic environments are discussed. The primary idea here is that
having multiple agents each which evolve to be more effective at smaller
problems and then share their knowledge with each other (though not necessarily
with the next generation to prevent local optima) may be effective. Then two
models of how to do this are discussed. Next the algorithms used in each agent
are reviewed, in this case a neural-genetic hybrid algorithm. The rules of the
system used to simulate these agents are then described. Following this, how
social learning and individual learning work in the context of this experiment
is shown in detail. The article concludes with a short description of where
further research may continue and infers that this is a very feasible, though
imperfect solution.
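To convince myself the social-learning scheme makes sense, here is a toy sketch: several agents each evolve on their own sub-problem and periodically share their current best individual with the other agents (but not with their own offspring). Everything here, from the made-up fitness functions to the sharing schedule, is my own assumption, not Kendall and Su's actual system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each agent optimizes a different slice of a larger problem (a toy
# stand-in for agents specializing in different market conditions).
def make_fitness(target):
    return lambda w: -np.sum((w - target) ** 2)

targets = [rng.normal(0, 1, 5) for _ in range(4)]        # 4 sub-problems
fitnesses = [make_fitness(t) for t in targets]
agents = [rng.normal(0, 1, (20, 5)) for _ in range(4)]   # one population per agent

for gen in range(50):
    for i, pop in enumerate(agents):
        scores = np.array([fitnesses[i](w) for w in pop])
        elite = pop[np.argsort(scores)[-5:]]              # individual learning
        agents[i] = elite[rng.integers(0, 5, 20)] + rng.normal(0, 0.1, (20, 5))
    if gen % 10 == 9:
        # Social learning: each agent receives the current best individual
        # of every other agent (shared knowledge, not inherited by offspring).
        bests = [pop[np.argmax([fitnesses[i](w) for w in pop])]
                 for i, pop in enumerate(agents)]
        for i in range(4):
            for j, b in enumerate(bests):
                if i != j:
                    agents[i][rng.integers(0, 20)] = b    # replace a random member

print([round(max(fitnesses[i](w) for w in agents[i]), 3) for i in range(4)])
```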
I think the idea of using agents that each evolve a solution to only a small subset of the problem is brilliant. It should be extended not only to creating different points of view on how the environment works but also to situations carefully selected (by another algorithm) to be ones the algorithm will excel in. The idea of the agents communicating with each other also seems very useful and suggests that it may be a good idea to have multiple agents looking at any given situation, just as you would have a team of people look at a hard problem. The idea of imperfect environments is one that I think can be applied to many situations, because so many real-world problems are too complex to accurately model; ways to deal with this may be part of the answer to how to effectively deal with the market.
Wang, X., P. K. H. Phua, and W. Lin. "Stock Market Prediction using Neural Networks: Does Trading Volume Help in Short-Term Prediction?" Neural Networks, 2003. Proceedings of the International Joint Conference on. Web.
"Stock Market Prediction using Neural Networks:
Does Trading Volume Help in Short-Term Prediction?" explores the
relationship between trading volume and the forecasting abilities of neural
networks. Xiaohua concludes that though recent studies show that there is a
significant bidirectional nonlinear relationship between stock return and
trading volume the effect on the forecasting abilities of neural networks is at
best irregular and at worst consistently bad because of overfitting. Xiaohua
reviews a myriad of studies all concluding that there *is* a relationship
between trading volume and stock return when using neural networks though since
it is nonlinear it may be useless to trading models. Xiaohua also introduces a
study that shows when using a statistical model of the exchange a consistent
improvement can even be attained by tweaking trading volume. The experiment
that was used to study the relationship was as follows: three three-layer
feedforward time delay neural network (TDNN) as proposed by references 3 and 10
were trained using different market exchange data then the networks were set
loose on the exchange and compared using the mean absolute percentage errors
(MAPE) and mean square error statistics (MSE). The experiment was then repeated
with different trading volumes. Xiaohua then disuses the possible reasons for
this result and possible extensions to the research, the possibility that the
relationship being observed is to weak to be accurately modeled by neural
networks and that the case is probably the same for most (if not all) other fundamental
factors that can be observed about the market. Xiaohua suggest the solution to
this may be to use multiple factors so that a stronger (though possibly more
erratic) relationship can be modeled.
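The two error statistics used to compare the networks are standard, so here is a quick sketch of both, with toy numbers standing in for the actual forecasts (these values are made up, not from the paper):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error: average of |error| / |actual|, in %."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mse(actual, forecast):
    """Mean squared error: average of the squared errors."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean((actual - forecast) ** 2)

actual = [101.2, 102.5, 101.9, 103.1]     # toy closing prices
forecast = [100.8, 102.9, 102.4, 102.6]   # toy network forecasts
print(f"MAPE: {mape(actual, forecast):.3f}%  MSE: {mse(actual, forecast):.4f}")
```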
This study supports the hypothesis I am developing: that it is not possible to model the market using any single factor accurately enough to be useful, and that the only way a consistent improvement may be acquired is to combine the results of many different factors (as discussed in the previous reference). The point that factors have different degrees of causality in different markets is also explored in this study, and I think it points to the idea of using agents trained for specific kinds of situations as one that may be promising.
Grossklags, J., and C. Schmidt. "Software Agents and Market (in) Efficiency: A Human Trader Experiment." Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 36.1 (2006): 56-67. Web.
In "Software Agents and Market (in) Efficiency: A
Human Trader Experiment." Grossklags and Schmidt (G and S) explore the
relationship between traders having knowledge that computers were trading
against them and how well they did while trading. G and S showed that when the
traders had knowledge of the traders they were significantly more efficient
with their trades especially in regards to factors that triggered trades by the
agents. G and S first review a similar situation to the market where traders
and computers interact, ebay, the point that computers can more effectively
exploit the lack of knowledge that some users possess effectively when the
users have no knowledge of how software works is shown. G and S state that the
purpose of the paper is to show that a similar situation will occur when
observing the market and to explore the psychological drivers of human-agent
interaction. G and S then review work that is similar to theirs all of which
confirmed that there may exist a correlation between the knowledge of the
agents to the traders and the profits that the traders made. The contribution
of the introduction of the information variable to the system and it’s
relationship to the traders profits is then disused. The algorithm used by the
agents is called “passive arbitrage-seeking” or arbitrageur and was developed
by Grossklags, it is effective in situations where traders are much more
prevalent then agents and acts on the imperfections of human traders. The
arbitrageur can be consider a passive, and rather parasitic agent. The agent is
“dumb” and does not evolve at all, though it still manages to be a fairly
effective agent.
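The paper defines the arbitrageur precisely; as a rough illustration of the general "passive arbitrage-seeking" idea, the sketch below just watches outstanding orders and acts only when human traders leave a crossed market (a bid above an ask). The data structures and numbers are my own assumptions, not the exact agent from the paper.

```python
def find_arbitrage(bids, asks):
    """Passively scan outstanding orders and return a (buy, sell) price pair
    whenever someone bids more than someone else asks. The agent is 'dumb':
    it never predicts, it only exploits imperfections traders leave behind."""
    best_bid = max(bids) if bids else None
    best_ask = min(asks) if asks else None
    if best_bid is not None and best_ask is not None and best_bid > best_ask:
        return best_ask, best_bid   # buy low at the ask, sell high to the bid
    return None

# Human traders have left a crossed market: a 101.5 bid against a 100.9 ask.
book_bids = [99.0, 101.5, 100.2]
book_asks = [100.9, 102.0, 103.5]
trade = find_arbitrage(book_bids, book_asks)
if trade:
    buy, sell = trade
    print(f"buy at {buy}, sell at {sell}, riskless profit {sell - buy:.2f}")
```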
The most interesting part of this study for my work is the agent that was used, the “arbitrageur”. This contradicts what I previously believed (that it is not possible to create a deterministic strategy that can make a profit in a given market situation) and points me towards a more dynamic strategy where an evolutionary algorithm is used to determine which deterministic algorithm should be used for a given situation. The change in the humans’ behavior depending on their knowledge of agents being involved in the system is also interesting, though I don’t feel it will be particularly useful to the research I am doing. I will have to review the paper by Grossklags about his trader.
Kampouridis, M., Shu-Heng Chen, and E. Tsang. "Investigating the Effect of Different GP Algorithms on the Non-Stationary Behavior of Financial Markets." Computational Intelligence for Financial Engineering and Economics (CIFEr), 2011 IEEE Symposium on. Web.
"Investigating the Effect of Different GP
Algorithms on the Non-Stationary Behavior of Financial Markets" extends
previous research that showed that that the market was non-stationary and that
a GP algorithm that was not able to adopt new strategies for different market
conditions would decrease in performance and become obsolete. Tsang concluded
that this is not only the case for the particular algorithm that was used in the
original study but for any algorithm that does not coevolve with the market
though no evidence was found that the GPs that do not coevolve will continually
decrease in performance. This conclusion was drawn from using a more extensive
set of GPs and applying them to 10 different markets. The Authors determined
that agents can be divided into two different categories: N-type models (agents
that have preset market strategies (like the pervious papers algorithm) and
must chose in a deterministic manner what one to use) and Santa-Fe Institute
(SFI) (agents that can create strategies called novelties that work for a short
time in a very specific situation, these agents can effectively coevolve with
the market) like ones. The first GP algorithm that was used was called a simple
GP algorithm and was inspired by a tool called EDDIE. The simple GP observed
the following variables: Moving Average (MA), Trader Break Out (TBR), Filter
(FLR), Volatility (Vol), Momentum (Mom), and Momentum Moving Average (MomMA).
Each indicator used by the simple GP had two different periods, 12 days and 50
days. The simple GP then used the variables and their success in prediction to
generate a tree using a BNF grammar for evolutionary and diagnostic purposes.
The second GP used is called EDDIE 7. EDDIE 7 is an extension of the simple GP
but also includes a constrained fitness function that is more stringent. The
final GP used is called EDDIE 8 which extends EDDIE 7 with the ability to more
accurately determine what time frame for each variable should be used. The
authors concluded that because all of these different algorithms came up with
similar results of being able to remain effective after a long period of time
trading that a GP that can coevolve is a much better choice for a moving
market.
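To ground the indicator list, here is a sketch of a few of the named indicators over the paper's 12- and 50-day periods. The formulas follow the common technical-analysis definitions, which I am assuming match EDDIE's; I left out TBR and FLR because I am not sure of their exact definitions.

```python
import numpy as np

def moving_average(prices, period):
    """MA: mean of the last `period` closing prices, rolled over the series."""
    return np.convolve(prices, np.ones(period) / period, mode="valid")

def momentum(prices, period):
    """Mom: difference between today's price and the price `period` days ago."""
    prices = np.asarray(prices)
    return prices[period:] - prices[:-period]

def volatility(prices, period):
    """Vol: rolling standard deviation of daily log returns."""
    r = np.diff(np.log(prices))
    return np.array([r[i - period : i].std() for i in range(period, len(r) + 1)])

rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 200)))  # toy price path
for period in (12, 50):              # the two horizons used by the simple GP
    print(period, moving_average(prices, period)[-1],
          momentum(prices, period)[-1], volatility(prices, period)[-1])
```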
This study confirms that I need to use an SFI-like algorithm, and it does not in any way say that using it in conjunction with an N-type algorithm would be detrimental, as long as the algorithm can also develop new strategies to use. The list of variables to use as indicators is also interesting, because the mass of statistics available for market analysis is overwhelming and having a starting place will be extremely helpful.
Wellman, M. P., et al. "Trading Agents Competing: Performance, Progress, and Market Effectiveness." Intelligent Systems, IEEE 18.6 (2003): 48-53. Web.
In the article "Trading Agents Competing: Performance, Progress, and Market Effectiveness," Wellman compares various trading agents that competed in the annual Trading Agent Competition. After three years of competitions, Wellman feels that results from the competitions relate fairly well to the actual (real-world) performance of the traders used. The game the trading agents competed in involved optimizing a set of parameters for a vacation a theoretical client wanted to take, but the agents had to bid on each item against each other, thereby creating a market (similar to the stock market). The study concluded that Walverine (the University of Michigan's entry) was best at optimizing individual agent success. The paper also observed that though an algorithm may be good at optimizing the success of an individual agent, it did not necessarily help the whole group of agents and sometimes was even detrimental.
This article was nearly useless, as it did not actually explore why any algorithm was better than the others; it simply reviewed results and the effect of the traders on each other. Furthermore, because of its focus on a market run only by agents, it is impossible to determine how useful the results of the research would actually be in a general market situation.
Saad, E. W., D. V. Prokhorov, and D. C. Wunsch II. "Comparative Study of Stock Trend Prediction using Time Delay, Recurrent and Probabilistic Neural Networks." Neural Networks, IEEE Transactions on 9.6 (1998): 1456-70. Web.
THIS PAPER NEEDS REVIEW; there is useful information here about how all three of the neural networks studied are constructed.
In "Comparative Study of Stock Trend Prediction
using Time Delay, Recurrent and Probabilistic Neural Networks." Sadd
explores minimizing the number of low false alarm predictions using time delay,
recurrent and probabilistic neural networks (TDNN, RNN, and PNN) where TDNN and
RNN use Kalman filter training. Sadd observes that short term predictions are
much easier to do reliably with neural networks and that a attempts to profit
on short term positions have to compensate for rists, taxes and transaction
costs. Sadd has in previous papers determined that all three of the neural
networks being used can effectively take into account both observations. Sadd
describes TDNNs as networks that are three layers deep the first layer takes
the input the second observes patterns and the third sums results. Sadd then
references two methods of training the TDNN. The second neural network (PNN),
this network memorizes patterns that it has seen in the market before then
calculates the probability of that pattern happening in the current situation
effectively simulating a Bayesian network but in a much cheaper manner. The
final network (RNN) is then discussed as a NN that can represent and encode
deeply hidden stats of the situation it is and has observed. They system that
is used to train the network is called the extended Kalman filter (EKF). The
conclusion of the paper is that each network has situations where they are
optimal. TDNN is a good general purpose solution and is not particularly
complex to implement and is not very heavy on the memory requirement. PNN is
extremely simple to implement and is very good at not creating false alarms,
and is more effective with stocks that are not particularly volatile (eg.
apple). Finally RNN is the most powerful and has a remarkably low false alarm
rate but is very difficult to implement.
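Of the three networks, the PNN is the simplest to show in miniature: it memorizes labeled training patterns and classifies a new pattern by summing a Gaussian kernel per class (Parzen windows). A sketch with toy patterns rather than the paper's market data; the window width and the up/down labeling scheme are my own assumptions.

```python
import numpy as np

def pnn_classify(train_x, train_y, query, sigma=0.5):
    """Probabilistic neural network: each stored pattern contributes a
    Gaussian kernel; the class whose kernels give the query the highest
    summed density wins. This approximates a Bayes classifier cheaply,
    at the cost of memorizing the whole training set."""
    train_x, train_y = np.asarray(train_x), np.asarray(train_y)
    kernels = np.exp(-np.sum((train_x - query) ** 2, axis=1) / (2 * sigma**2))
    classes = np.unique(train_y)
    densities = [kernels[train_y == c].sum() for c in classes]
    return classes[int(np.argmax(densities))]

# Toy patterns: recent return windows labeled 'up' (1) or 'down' (0) next day.
x = [[0.01, 0.02], [0.015, 0.01], [-0.02, -0.01], [-0.01, -0.015]]
y = [1, 1, 0, 0]
print("predicted class:", pnn_classify(x, y, np.array([0.012, 0.018])))
```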
This paper is a great review of the possible NNs to use when working with short-term predictions and will probably be a good starting point when I actually implement an algorithm. The paper is also good evidence that short-term prediction may be the best method to use for naïve algorithmic structures. After reading this paper, I conclude that I will use a GP algorithm in conjunction with a PNN or RNN to predict stock movements and report on whether this is a good method.