

September 21, 2011

The Information Dilemma

Today I spent a good chunk of the day at the Strata Conference in NYC, hearing some great speakers, speaking on a few panels and catching up with old friends. All in all, it was a productive and enjoyable day. 

However, one of the panels on which I participated, the Son of Money:Tech, was a revival of a terrific conference that kicked off in 2008 but was ultimately mothballed in the wake of the financial crisis. When the conference was held in 2008 it felt cool, integrating the discussion of what we now call the “Big Data value stack” into the realities of Wall Street, hedge funds and even consumer finance. It was the ultimate NYC conference. Fast forward to today. Our 45-minute panel discussion was just getting going when it sadly had to end; we had only begun digging into some of the pressing issues of the day, e.g., historically high market correlations, the democratization of journalism coupled with the paradoxical importance of brand in an increasingly fragmented landscape, and the massive barriers to entry in the low latency trading game. It left both the audience and the discussion leaders wanting more. In fact, I’d argue that a Money:Tech conference, circa 2012, would be far more thought-provoking and high-impact today than it was only three years ago. Cloud computing and storage are exponentially greater today than they were in 2008. Semantic technologies have continued to make large strides. Both the NoSQL movement and advances in relational databases have materially altered the face of web-scale and real-time analytics. And innovations in crowdsourcing and the leveraging of contributory databases, together with machine learning, have further advanced the field of predictive analytics. 
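As a toy aside (this is my illustration, not something worked through on the panel): the reason historically high market correlations are such a pressing issue comes straight out of textbook diversification math. For an equal-weight portfolio of N assets with identical volatility and average pairwise correlation rho, portfolio variance is sigma² · (1/N + rho·(N−1)/N), so as rho climbs toward 1, adding names buys you almost nothing. A minimal sketch (the `portfolio_vol` helper and the numbers are illustrative assumptions):

```python
# Toy illustration of how high average pairwise correlation erodes
# diversification in an equal-weight portfolio. With N assets, each of
# volatility sigma, and average pairwise correlation rho:
#   portfolio variance = sigma**2 * (1/N + rho * (N - 1) / N)

def portfolio_vol(sigma: float, n: int, rho: float) -> float:
    """Volatility of an equal-weight portfolio of n identical assets."""
    variance = sigma ** 2 * (1.0 / n + rho * (n - 1) / n)
    return variance ** 0.5

if __name__ == "__main__":
    sigma = 0.20  # assume 20% single-asset annualized volatility
    for rho in (0.1, 0.5, 0.9):
        # Even with 100 names, high rho drags portfolio vol back
        # toward the single-asset level.
        print(f"rho={rho:.1f}  portfolio vol={portfolio_vol(sigma, 100, rho):.3f}")
```

With 100 names and rho near 0.9, the portfolio behaves almost like a single 20%-vol asset, which is why correlation spikes squeeze long/short and relative-value strategies alike.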

Just musing about these topics in an unstructured manner during the panel made me think about how complicated the world has become. Making money is harder - and more costly - than ever before. Yes, more powerful open-source tools are available. Yes, the rise in open data has created analytical sandboxes the likes of which couldn’t be imagined even five years ago. Yes, the cloud has made massively scalable pay-as-you-go storage and processing accessible to even the earliest start-ups. However, as compute and data have become cheaper and more readily available, the data deluge has made it ever harder to extract tradable signal from the sea of content: content that is structured and unstructured; in heterogeneous formats; with and without (affordable) historical archives; and assembled along different time scales. These are complex problems that often require teams of highly skilled data analysts and scientists to solve - the kinds of teams that are very expensive and available only to the very rich (think Bridgewater, Renaissance Technologies, Two Sigma, AQR, Citadel, Teza and Goldman Sachs). So does this mean that the mega-quants are so firmly entrenched that no one can hope to successfully compete against them? Well…

Hedge funds and Wall Street firms, like technology companies, can often become victims of their own success. It is very hard for the mega-quant fund to live on the razor’s edge printing hundreds of millions or billions of dollars of gains, year in, year out. There are always younger, scrappier, hungrier and more nimble competitors just looking for the smallest opportunity to earn their way into the club. It is hard to remain #1 in all aspects of what makes trading firms successful, and the most successful proactively disrupt themselves just as the most successful companies do (be they technology companies or widget makers). But it is a steep challenge. Then there is the issue of time scale. While there are very clear benefits of having vast resources to compete in the low latency end of the continuum (co-location, data teams, huge data budgets, etc.), as one moves towards the longer-term thematic long/short strategies the benefits of scale begin to melt away. Good research and good trade implementation, coupled with prudent risk management, can go a long way towards generating differentiated alpha. Co-location? Massive real-time data and historical data costs? Investment in real-time execution platforms? Not necessary. That said, these strategies are far more volatile and place more capital at risk than the low latency quant strategies. So as with anything, there is no free lunch.

These are just a few of the topics that surfaced during today’s panel. It was a pleasure kicking it around with my pals Rob Passarella (of Dow Jones) and Paul Kedrosky (of our Solar System), and I look forward to a redux in 2012. 
