A few excerpts from an EnterpriseTech article I found interesting, "Wall Street Wants Tech to Trade Smarter And Faster".
If I can instead predict that I am going to be first in the queue 80 percent of the time, I think I can get there. A high frequency trader told me about a year ago that you have to consistently be among the top 30 in the queue, otherwise there is no way for you to make any kind of money. And only a few firms can afford to play at that level.
So today, the big players are starting to use machine learning and predictive analytics not only to do statistical arbitrage, but to predict where they can get to the front of the queue before they execute a trade.
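The article does not say how the queue-position prediction works, but the idea can be sketched as a simple probabilistic classifier over order-book features, gated on the "80 percent of the time" rule of thumb from the quote. Everything below (the feature names, the logistic weights, the threshold helper) is a hypothetical illustration, not any firm's actual model:

```python
import math

def queue_front_probability(features, weights, bias):
    """Logistic model estimating the probability of landing at the
    front of the order queue. Features and weights are invented for
    illustration only."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [queue depth at best bid, recent cancel rate,
# normalized gateway latency] -- stand-ins for whatever signals a real
# desk would engineer.
weights = [-0.8, 1.2, -1.5]
bias = 0.4

def should_send_order(features, threshold=0.8):
    # The article's heuristic: only execute when you predict you will
    # be at the front of the queue ~80 percent of the time.
    return queue_front_probability(features, weights, bias) >= threshold
```

For example, `should_send_order([0.0, 1.0, 0.0])` clears the 80 percent bar with these toy weights, while `should_send_order([0.1, 0.9, 0.2])` does not, so the order would be held back.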
To do such trading takes a lot of data, of course, as Alex Tsariounov, principal architect of the London Stock Exchange, explained. There is a tremendous amount of post-market data that gets chewed on, and it often takes two days to run simulations and risk analysis against that data. “The problem is that by the time you make decisions, the data is slightly stale. So what we want to be able to do is real-time analysis on that data and make that data available essentially as a market feed to trading participants.”
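The gap between a two-day batch job and a real-time market feed comes down to whether the risk figure is recomputed over the full history or updated incrementally on each tick. A minimal sketch of the incremental approach, using Welford's single-pass algorithm for streaming variance (the tick returns are made up; this is an illustration of the technique, not the LSE's system):

```python
class StreamingVolatility:
    """Single-pass (Welford) mean/variance over a stream of returns,
    so a risk figure is available after every tick instead of after
    an overnight batch run."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # Sample variance; defined once we have at least two ticks.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

vol = StreamingVolatility()
for r in [0.01, -0.02, 0.015, -0.005]:  # invented tick returns
    vol.update(r)
```

Because each `update` is O(1) and touches only three scalars, the same estimate that a batch job would recompute from scratch is always current, which is the shape of "real-time analysis on that data" the quote is after.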
Fadi Gebara, a senior manager at IBM Research, said that Wall Street firms were taking a hard look at shared memory systems. After analyzing code at financial services firms, IBM has figured out that a lot of what applications are doing is shuffling around bits rather than chewing on them, and on a cluster of X86 servers you can, as he put it, “get MPI’d to death getting all of that communication going.”
“Data movement is very expensive,” Gebara continued. “Programs are moving data left and right and all over the place, and we have found that 80 to 90 percent of what the program is doing is moving data.”
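Gebara's 80-to-90-percent figure comes from profiling real applications, but the asymmetry is easy to see in a toy accounting model: a pipeline that materializes a fresh copy of its working set at every stage moves many times more bytes than one that mutates in place. The stage count and data size below are invented purely to make the arithmetic concrete:

```python
def bytes_moved_copying(n_stages, working_set_bytes):
    # Each stage copies the full working set before doing its
    # (comparatively small) computation on it.
    return n_stages * working_set_bytes

def bytes_moved_in_place(working_set_bytes):
    # The working set is loaded once; every stage mutates it in place.
    return working_set_bytes

ws = 64 * 1024**3  # hypothetical 64 GiB of post-market data
copying = bytes_moved_copying(5, ws)
in_place = bytes_moved_in_place(ws)

# With five copy-heavy stages, 4/5 of the traffic is pure data
# movement rather than computation -- the same ballpark as the
# 80-to-90-percent figure in the quote.
movement_share = (copying - in_place) / copying
```

The point of shared memory (or of pushing processing into the NICs, as described below) is to collapse those copies so the bytes are touched where they already live.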
The risk management is embedded in one set of top-of-rack switches, these link out to the core switches, and the symbol routing that was done in the gateway is pushed into the top-of-rack switches out above the matching engine. The data pre-processing and post-processing is pushed down into the network interface cards on the servers in the matching engine. When this is all done, the trade plant has a 40 percent reduction in server count, to 60 machines, and only needs 1,000 ports instead of 1,500 ports, a 33 percent reduction. More importantly, the trades can execute in 100 microseconds, a 33 percent reduction.
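The before/after figures quoted above are internally consistent, which a quick back-of-the-envelope check confirms. The pre-consolidation values are not stated in the article; they are inferred here from the given percentages:

```python
# After-figures and reductions as stated in the article.
servers_after, server_reduction = 60, 0.40
ports_after, ports_before = 1000, 1500
latency_after_us, latency_reduction = 100, 1 / 3  # "33 percent"

# Inferred before-figures (not stated in the article).
servers_before = servers_after / (1 - server_reduction)         # 100 machines
port_reduction = 1 - ports_after / ports_before                 # exactly 1/3
latency_before_us = latency_after_us / (1 - latency_reduction)  # 150 microseconds
```

So the implied starting point is roughly 100 servers, 1,500 ports, and 150-microsecond trade execution.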