Open-source project: robertmartin8/MachineLearningStocks
Repository: https://github.com/robertmartin8/MachineLearningStocks
Language: Python (100.0%)

# MachineLearningStocks in python: a starter project and guide

**EDIT as of Feb 2021: MachineLearningStocks is no longer actively maintained.**

MachineLearningStocks is designed to be an intuitive and highly extensible template project applying machine learning to making stock predictions. My hope is that this project will help you understand the overall workflow of using machine learning to predict stock movements, and also appreciate some of its subtleties. And of course, after following this guide and playing around with the project, you should definitely make your own improvements – if you're struggling to think of what to do, at the end of this readme I've included a long list of possibilities: take your pick.

Concretely, we will be cleaning and preparing a dataset of historical stock prices and fundamentals, training a classifier on it, backtesting the strategy, and generating predictions on current data.

While I would not live trade based off of the predictions from this exact code, I do believe that you can use this project as a starting point for a profitable trading system – I have actually used code based on this project to live trade, with pretty decent results (around 20% returns on backtest and 10-15% on live trading).

This project has quite a lot of personal significance for me. It was my first proper python project, one of my first real encounters with ML, and the first time I used git. At the start, my code was rife with bad practice and inefficiency: I have since tried to amend most of this, but please be warned that some minor issues may remain (feel free to raise an issue, or fork and submit a PR). Both the project and myself as a programmer have evolved a lot since the first iteration, but there is always room to improve.

As a disclaimer, this is a purely educational project. Be aware that backtested performance may often be deceptive – trade at your own risk!

MachineLearningStocks predicts which stocks will outperform, but it does not suggest how best to combine them into a portfolio. I have just released PyPortfolioOpt, a portfolio optimisation library which uses classical efficient frontier techniques (with modern improvements) to generate risk-efficient portfolios. Generating optimal allocations from the predicted outperformers might be a great way to improve risk-adjusted returns.

This guide has been cross-posted at my academic blog, reasonabledeviations.com.

## Contents

- Overview
- Quickstart
- Preliminaries
- Historical data
- Creating the training dataset
- Backtesting
- Current fundamental data
- Stock prediction
- Unit testing
- Where to go from here
- Contributing
## Overview

The overall workflow for using machine learning to make stock predictions is as follows:

1. Acquire historical fundamental data – these will be our features.
2. Acquire historical stock prices and historical S&P500 index prices, so that each stock's subsequent performance relative to the index can be computed – this will be our target.
3. Preprocess and parse the data into a training dataset.
4. Train a machine learning model on the training dataset.
5. Backtest the performance of the model.
6. Acquire current fundamental data for each stock.
7. Generate predictions from the current data.
This is a very generalised overview, but in principle this is all you need to build a fundamentals-based ML stock predictor.

### EDIT as of 24/5/18

This project uses pandas-datareader to download historical price data from Yahoo Finance. However, in the past few weeks this has become extremely inconsistent – it seems like Yahoo have added some measures to prevent the bulk download of their data. I will try to add a fix, but for now, take note that the price download may fail. As a temporary solution, I've uploaded the price data as CSV files so that the rest of the project can still be run.

### EDIT as of October 2019

I expect that after so much time there will be many data issues. To that end, I have decided to upload the remaining CSV files as well.

## Quickstart

If you want to throw away the instruction manual and play immediately, clone this project, then download and unzip the data file into the same directory. Then, open an instance of terminal, cd to the project's file path (e.g. `cd Users/User/Desktop/MachineLearningStocks`), and run the following:

```bash
pip install -r requirements.txt
python download_historical_prices.py
python parsing_keystats.py
python backtesting.py
python current_data.py
pytest -v
python stock_prediction.py
```

Otherwise, follow the step-by-step guide below.

## Preliminaries

This project uses python 3.6 and the common data science libraries; the full list is in requirements.txt. To install them all at once, run the following in terminal:

```bash
pip install -r requirements.txt
```

To get started, clone this project and unzip it. This folder will become our working directory, so make sure you cd into it.

## Historical data

Data acquisition and preprocessing is probably the hardest part of most machine learning projects. But it is a necessary evil, so it's best to not fret and just carry on. For this project, we need three datasets:

1. Historical stock fundamentals
2. Historical stock prices
3. Historical S&P500 index prices
We need the S&P500 index prices as a benchmark: a 5% stock growth does not mean much if the S&P500 grew 10% in that time period, so all stock returns must be compared to those of the index.

### Historical stock fundamentals

Historical fundamental data is actually very difficult to find (for free, at least). Although sites like Quandl do have datasets available, you often have to pay a pretty steep fee. It turns out that there is a way to parse this data, for free, from Yahoo Finance. I will not go into details, because Sentdex has done it for us. On his page you will be able to find a zip file of the scraped key statistics HTML, which you should download and unzip into the project directory.

### Historical price data

As noted in the edit above, this project uses pandas-datareader to download historical price data from Yahoo Finance; the same library can be used to download the S&P500 index prices. The code for downloading historical price data can be run by entering the following into terminal:

```bash
python download_historical_prices.py
```

## Creating the training dataset

Our ultimate goal for the training data is to have a 'snapshot' of a particular stock's fundamentals at a particular time, and the corresponding subsequent annual performance of the stock.

For example, if our 'snapshot' consists of all of the fundamental data for AAPL on the date 28/1/2005, then we also need to know the percentage price change of AAPL between 28/1/05 and 28/1/06. Thus our algorithm can learn how the fundamentals impact the annual change in the stock price.

Actually, this is a slight oversimplification: what the algorithm will eventually learn is how fundamentals impact the outperformance of a stock relative to the S&P500 index. This is why we also need index data.

### Preprocessing historical price data

When we download the historical price data, there are no rows for days on which the market is closed. Referring to the example of AAPL above, if our snapshot includes fundamental data for 28/1/05 and we want to see the change in price a year later, we will get the nasty surprise that 28/1/2006 is a Saturday. Does this mean that we have to discard this snapshot? By no means – data is too valuable to callously toss away. As a workaround, I instead decided to 'fill forward' the missing data, i.e. we will assume that the stock price on Saturday 28/1/2006 is equal to the stock price on Friday 27/1/2006 (a minimal pandas sketch of this idea appears after the feature list below).

### Features

Below is a list of some of the interesting variables that are available on Yahoo Finance. They fall into three categories:

- Valuation measures
- Financials
- Trading information
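As promised in the preprocessing section above, here is a minimal pandas sketch of the forward-fill workaround and of how a single snapshot could be labelled with its outperformance relative to the S&P500. The prices, dates, and layout are made up for illustration and do not reflect how the scripts in this repository are actually organised.

```python
import pandas as pd

# Made-up prices: only two observations each, for brevity.
stock_prices = pd.Series({"2005-01-28": 5.00, "2006-01-27": 7.00})      # e.g. AAPL
index_prices = pd.Series({"2005-01-28": 1171.0, "2006-01-27": 1284.0})  # S&P500
stock_prices.index = pd.to_datetime(stock_prices.index)
index_prices.index = pd.to_datetime(index_prices.index)

# 'Fill forward': weekends and holidays inherit the previous trading day's price,
# so Saturday 28/1/2006 takes Friday's close.
all_days = pd.date_range("2005-01-28", "2006-01-28", freq="D")
stock_filled = stock_prices.reindex(all_days).ffill()
index_filled = index_prices.reindex(all_days).ffill()

# One-year return of the stock vs the index, starting at the snapshot date.
snapshot, one_year_later = "2005-01-28", "2006-01-28"
stock_return = stock_filled[one_year_later] / stock_filled[snapshot] - 1
index_return = index_filled[one_year_later] / index_filled[snapshot] - 1
print(f"stock: {stock_return:+.1%}, index: {index_return:+.1%}, "
      f"outperformance: {stock_return - index_return:+.1%}")
```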
### Parsing

However, all of this data is locked up in HTML files. Thus, we need to build a parser. In this project, I did the parsing with regex, but please note that it is generally not recommended to use regex to parse HTML. However, I think regex probably wins out for ease of understanding (this project being educational in nature), and from experience regex works fine in this case.

This is the exact regex used:

```python
r'>' + re.escape(variable) + r'.*?(\-?\d+\.*\d*K?M?B?|N/A[\\n|\s]*|>0|NaN)%?(</td>|</span>)'
```

While it looks pretty arcane, all it is doing is searching for the first occurrence of the feature (e.g. "Market Cap"), then it looks forward until it finds a number immediately followed by a closing </td> or </span> tag (i.e. the end of the table cell).
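To make this more concrete, here is a minimal sketch of the regex applied to a made-up fragment in the style of a Yahoo Finance statistics page; the HTML snippet and the value are invented purely for illustration:

```python
import re

# Hypothetical fragment in the style of a Yahoo Finance key-statistics page.
html = "<tr><td>Market Cap</td><td>2.41B</td></tr>"

variable = "Market Cap"
regex = (r'>' + re.escape(variable)
         + r'.*?(\-?\d+\.*\d*K?M?B?|N/A[\\n|\s]*|>0|NaN)%?(</td>|</span>)')

match = re.search(regex, html)
print(match.group(1) if match else "N/A")  # prints: 2.41B
```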
Both the preprocessing of price data and the parsing of keystats are included in parsing_keystats.py. Run the following in terminal:

```bash
python parsing_keystats.py
```

You should see the output file appear in your working directory.

## Backtesting

Backtesting is arguably the most important part of any quantitative strategy: you must have some way of testing the performance of your algorithm before you live trade it. Despite its importance, I originally did not want to include backtesting code in this repository, for a few reasons.
Nevertheless, because of the importance of backtesting, I decided that I can't really call this a 'template machine learning stocks project' without it. Thus, I have included a simplistic backtesting script. Please note that there is a fatal flaw with this backtesting implementation that will result in much higher backtesting returns than are realistically achievable. It is quite a subtle point, but I will let you figure that out :)

Run the following in terminal:

```bash
python backtesting.py
```

The script will print a summary of the strategy's backtested performance.
Again, the performance looks too good to be true and almost certainly is.

## Current fundamental data

Now that we have trained and backtested a model on our data, we would like to generate actual predictions on current data. As always, we can scrape the data from good old Yahoo Finance. My method is to literally just download the statistics page for each stock (here is the page for Apple), then to parse it using regex as before.

In fact, the regex should be almost identical, but because Yahoo has changed their UI a couple of times, there are some minor differences. This part of the project has to be fixed whenever Yahoo Finance changes their UI, so if you can't get the project to work, the problem is most likely here.

Run the following in terminal:

```bash
python current_data.py
```

The script will then begin downloading the HTML statistics page for each stock and parsing it as before.

## Stock prediction

Now that we have the training data and the current data, we can finally generate actual predictions. This part of the project is very simple: the only thing you have to decide is the value of one parameter – the outperformance threshold, i.e. how much a stock must beat the S&P500 by before it counts as an outperformer.

```bash
python stock_prediction.py
```

The script will print the list of stocks that the model predicts will outperform.
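To make this step concrete, here is a heavily simplified sketch of what training a classifier and generating predictions could look like, assuming the parsed historical keystats and the current fundamentals are available as CSV files and that a scikit-learn classifier is used. The file names, column names, threshold value, and choice of RandomForestClassifier are all assumptions for illustration – they are not taken from stock_prediction.py.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

OUTPERFORMANCE_THRESHOLD = 0.10  # assumed: stock must beat the S&P500 by 10%

# Hypothetical files/columns: historical snapshots and current fundamentals.
train = pd.read_csv("keystats.csv")          # assumed output of parsing_keystats.py
current = pd.read_csv("forward_sample.csv")  # assumed output of current_data.py

feature_cols = [c for c in train.columns
                if c not in ("Ticker", "stock_return", "index_return")]

# Label each snapshot: did the stock beat the index by more than the threshold?
y = (train["stock_return"] - train["index_return"]) > OUTPERFORMANCE_THRESHOLD

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train[feature_cols], y)

# Predict which of the current stocks are expected to outperform.
predictions = clf.predict(current[feature_cols])
print(current.loc[predictions.astype(bool), "Ticker"].tolist())
```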
## Unit testing

I have included a number of unit tests, most of which check that the datasets produced by the scripts above are in the expected format. I thus recommend that you run the tests after you have run all the other scripts (except, perhaps, the final prediction script).

To run the tests, simply enter the following into a terminal instance in the project directory:

```bash
pytest -v
```

Please note that the way the tests are packaged is not strictly best practice, but it keeps things simple for a project of this size.

## Where to go from here

I have stated that this project is extensible, so here are some ideas to get you started and possibly increase returns (no promises).

### Data acquisition

My personal belief is that better quality data is THE factor that will ultimately determine your performance.
### Data preprocessing
### Machine learning

Altering the machine learning stuff is probably the easiest and most fun to do.
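As one concrete example of an ML tweak, here is a small sketch of hyperparameter tuning with scikit-learn's GridSearchCV. The RandomForestClassifier, the parameter grid, and the synthetic X/y arrays are assumptions for illustration – substitute your own model and training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic placeholder data: replace with your parsed fundamentals and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # 500 snapshots, 10 fundamental features
y = rng.integers(0, 2, size=500)  # 1 = outperformed the index, 0 = didn't

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation
    scoring="precision",  # we care most about the stocks we actually buy
)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV precision:", round(search.best_score_, 3))
```

Other easy experiments along the same lines include swapping in a different classifier or a different scoring metric.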
## Contributing

Feel free to fork, play around, and submit PRs. I would be very grateful for any bug fixes or more unit tests.

This project was originally based on Sentdex's excellent machine learning tutorial, but it has since evolved far beyond that and the code is almost completely different. The complete series is also on his website.

For more content like this, check out my academic blog at reasonabledeviations.com/.