An interesting debate took place at last week’s Cloud2020 gathering regarding the viability of futures markets for cloud computing capacity. I’m firmly at the skeptical end of the spectrum, as the title of this post will attest. However, I had not given enough thought to the reasons for my skepticism. Having reflected on it a little, I’m more convinced than ever that any attempt to create either primary or secondary markets in cloud commodities is doomed. In short, the lack of friction and volatility in matching cloud computing supply and demand means there is likely no window for third-party market makers to insert themselves into this value network.
The NY Times published an interesting article today about the impact of robotics on manufacturing jobs, using Amazon’s purchase of robot maker Kiva Systems as an exemplar. The core question raised in the article is whether the adoption of automation technologies will create more unemployment or whether — as has been seen in the past — displaced workers will re-train and become employed in other productive areas of the economy.
“Cloud computing” is much more than simply a new set of technologies and business models. It is rapidly emerging as the platform that will underpin the next generation of digital products and services. Cloud Computing is transforming how consumers, companies, and governments store information, how they process and exchange that information, and how they utilize computing power. Consequently, it opens a new set of policy discussions while at the same time underlining the importance of old debates.
The Office of Governor Bob McDonnell of Virginia issued a press release today trumpeting its success in convincing Microsoft to build its latest Gen4 data center in the state. Given the huge gulf between the real long-term local economic benefit of a data center like this and the content of the press release, I am left to conclude that either A) the Governor really does not understand the economics of these things, or B) he really does understand the true economic impact and the press release is a cynical attempt to fool the voters.
For other politicians who might be tempted to spend taxpayers’ money to secure the location of a data center, here are some things you really do need to understand:
- Today’s modern global data centers do not create any meaningful number of jobs. They are remotely operated, which means that even the few jobs that are created are not high-skilled software or computer engineering roles. The jobs created tend to be in the building maintenance, HVAC, electrical engineering and security domains.
- There’s a high price to be paid for securing relatively few jobs. The figure of fifty touted by Virginia sounds about right. However, from the Governor’s press release, it sounds like at least $6.9 million of direct incentives, plus an unspecified amount of long-term commercial property tax exemptions, were used to secure those 50 jobs. Is that a good rate of return?
- When a technology company says it will invest $499 million in building a new data center, that does not mean it is investing $499 million in your state. Far from it. Most of that cost will be in the computing infrastructure, communications, and other electrical machinery. That equipment is by and large not built in the state where the data center will be located. In fact, a lot of it will be sourced from outside the country, creating jobs and employment elsewhere, but not in your state.
- Modern data centers are ‘Shells’ of buildings. Microsoft’s Gen 4 data centers employ pre-assembled container ‘Lego Blocks’ which are rolled onto a concrete slab and plugged into power, cooling and data pipes. If they could get away with building nothing other than the concrete slab and HVAC containment buildings they would, and in years to come they will. This all means that very few construction jobs will be created, and certainly not any that require significant new up-skilling of your labor force.
- On a positive note, a modern global data center is one of the most power-dense facilities around. That is why many sites are selected based on access to hydro-power, where the cost of energy and distribution challenges can be kept to a minimum. It is possible that the location of a major global data center in Virginia will create the downstream need to upgrade the state’s electricity distribution and generation infrastructure. Being forced to do this obviously creates ancillary benefits for other companies, but again at what price?
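The incentive arithmetic is worth making explicit. A back-of-the-envelope sketch (using only the figures quoted in the press release, and excluding the unspecified property tax exemptions, so this is a lower bound):

```python
# Direct incentive cost per job created, using the press-release figures.
# Long-term commercial property tax exemptions are unspecified and
# excluded here, so the true cost per job is higher than this.
direct_incentives = 6_900_000  # dollars
jobs_created = 50

cost_per_job = direct_incentives / jobs_created
print(f"Direct incentive cost per job: ${cost_per_job:,.0f}")  # prints $138,000
```

At $138,000 of direct subsidy per mostly low-skilled job, before counting the tax exemptions, the "rate of return" question rather answers itself.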
Politicians really do need to understand the reality of these things. It is far too easy to give away taxpayers’ money today on the assumption of major downstream economic benefits tomorrow. In the case of data centers those benefits are likely to be few and far between.
TechCrunch’s article about Intuit’s decision to shut down Quicken Online and not let users migrate their data to the new service Mint.com is an outstanding example of the emerging problem of ‘Data Lock-In’, an issue which regulators need to get ahead of before it’s too late.
As the market for ‘Cloud’ services takes off we’re going to witness an explosion of new services whose value proposition is based on managing some aspect of the consumer’s life on-line. The big buckets are already life-centric services such as health and fitness, personal finances and of course managing your social graph. The expanding roster of new services will be accompanied by an equally large number of failures. That is the natural order of things in new markets. Not all new entrants will find a business model which works or is sustainable over the long term. Regulators need to start thinking about the consumer protection regimes that need to be put in place as this happens.
I was struck, in the debate surrounding the Intuit issue, by the number of folks who think it’s not a big deal because Quicken Online is a ‘Free’ service. That could not be further off the mark. One of the challenges for any new entrant in the crowded market for ‘Cloud’ services will be figuring out how to make their service as ‘Sticky’ as possible. After having invested a great deal to acquire new customers you want to make sure it’s not easy for them to leave or switch to a competitor’s service. (Ever wonder why it’s incredibly difficult to dump your mobile provider? It’s called ‘Churn’ and ‘ARPU’.)
One way to build ‘Stickiness’ is to ensure that new customers have to invest a significant amount of personal effort to extract the full benefits of the service. In Quicken Online’s case that was the time required to upload banking information and months and months of entering individual financial transactions. Making it easy for customers to export their data from the service would undermine ‘Stickiness’ and therefore increase the likelihood that customers might switch to a competitor’s service. In the highly competitive ‘Cloud’ service market it will be very tempting to find ways to bind users to services by making it difficult for them to take their data and run.
Consumers need to be educated about the dangers of ‘Data Lock-In’ so that they increasingly demand Open Data Access (ODA) from ‘Cloud’ service providers. I think it’s also about time that government regulators started thinking about mandating ODA for all service providers. These regulations would be a natural extension of existing privacy regulations, enshrining the principle that personal information of any sort belongs to the consumer, not the provider, and that the consumer has the right to take that data with them when they leave a service. This would be a direct analog to ‘Number Portability’ requirements in the mobile telephony sector. Service providers would be required to ensure that all consumer-owned data can be exported from their service using open standard formats and protocols.
Such a regulatory requirement would protect consumers from the sort of mess now faced by Intuit’s customers. It would also force service providers to focus on the broader value proposition of their service rather than relying on ‘Data Lock-In’ as a competitive lever. This focus on ‘Functional’ competition would be a great thing for consumers and the market.
The UK’s Royal Society has just announced a major new research project on the implications of population growth. The study is to be headed by the Nobel Laureate Sir John Sulston. As the Royal Society points out, population studies is quite a cyclical area of research. The ‘60s and ‘70s were a boom time for the field, with less focus being given to the subject in the ‘80s and ‘90s. The fact that the topic is moving back up the political agenda probably has much to do with the intersection of population with two key policy issues: economic growth and climate change.
The political debate about population growth has been going on for more than two hundred years. Thomas Malthus, seen as the father of demography, clearly held the view that exponential population growth would outstrip improvements in agricultural production, condemning economies to a permanent state of subsistence. Adam Smith on the other hand believed that economic growth and improvements in people’s socioeconomic circumstances would reduce fertility rates over time.
There’s a prima facie case supporting Smith’s assertions. The fertility rate of many of the world’s most developed economies (much of Europe, the UK, Japan, etc., but notably not the United States, due in large part to the impact of immigration) has tumbled over the last forty years. That decline has mirrored a period of massive economic growth and rising living standards in these countries. The ‘Baby boom’ has become the ‘Baby bust’, with very significant implications for the affected geographies. As the demographic mix inverts there are fewer and fewer younger workers able to carry the social cost of the rising number of elderly in the population.
Interestingly, even though the total global population is still growing, the rate of growth has been in decline since the mid-sixties. There is a diversity of research which shows that the fertility rate has an inverse relationship with factors like educational level (particularly among women) and income level. If the Royal Society study is to be useful in guiding policy makers then it needs to deliver some definitive research on which factors really do impact fertility rates, and in what manner.
There is a perspective (one I do not subscribe to) that a growing global population is required to sustain the needed levels of economic growth. The argument put forward is that continued growth in the production of manufactured goods will require an expanding workforce to make those goods. This gets to the heart of one of my personal interests regarding the impact of technology on economic growth. I would argue that the opposite scenario is more likely, i.e. that even an increasing supply of manufactured goods will require fewer, not more, workers over time.
The number of workers now employed in building cars in the most advanced and automated factories is diminishingly small compared to just thirty years ago. With continued advances in robotics, process control technology and information systems this trend is likely to continue. Fifty years from now it’s likely that many factories will ingest raw materials at one end and spit out finished goods at the other, with very few human workers required in between.
To date this trend has been countered by the relative ease with which manufacturing facilities can be trans-located into low-wage economies. However, the very act of introducing manufacturing into low-wage economies has the effect of raising living standards and worker expectations, which in turn leads to demands for higher wages. When combined with worker shortages this can place significant upward pressure on wages, undermining the location’s original cost advantage. China is now witnessing this phenomenon, with many manufacturers now looking to move to other lower-wage East Asian economies. In my view this will over time lead to a leveling of the global wage-cost ‘Playing field’. As we reach that point the pressure to innovate in manufacturing automation will become extreme, because that will be the only way to avoid being caught between the ‘Pin factory’ and the ‘Invisible hand’. As global manufacturing competition shifts from trying to find the lowest-wage economy to finding the cheapest way to manufacture through automation, we will see a dramatic reduction in the global manufacturing work force.
The challenges with this scenario are daunting. If there is truly a correlation between rising income levels and declining fertility rates, then a long period of technology-led efficiency improvements in manufacturing, with its concomitant positive impact on economic output, should correlate with a continued slowing of the global population growth rate. If the point is reached where the global population growth rate becomes negative, then you enter the potentially ‘Utopian’ situation where a smaller and smaller global population shares an ever-growing economic pie. Alternatively, if the global manufacturing work force continues to shrink due to automation while the population continues to grow, and those lost manufacturing jobs cannot be more than offset by growth in services-sector employment, then you have a political time-bomb on your hands.
I seriously hope the Royal Society study will look at the correlation of population dynamics and economic growth and will factor in the role that technology will play on both sides of the equation. The likely impact of climate change on these issues I will leave for another post.
Mary-Jo Foley posted a provocative article yesterday about the reality of ‘Private Cloud’ offerings and who is driving the demand: customers or vendors?
As I was one of the people responsible for designing Microsoft’s Public Sector ‘Cloud’ strategy I have some opinions about this issue which I wanted to share.
It’s fair to say that until we started looking at the worldwide customer requirements for ‘Cloud’ in the public sector the whole issue of ‘Private Clouds’ was not a major part of the company’s overall ‘Cloud’ strategy. However, once you start looking at the requirements of public sector organizations outside the US you very quickly realize that standard ‘Public Cloud’ offerings will not cut it.
The majority of foreign governments have data sovereignty regulations which prohibit the storage and transport of data beyond the country’s borders. Many governments also have very serious concerns about the reach and implications of the US Patriot Act which requires any US based ‘Cloud’ provider to disclose any data held within their systems to the US Government, upon request, no matter where that data is stored. Obviously these requirements do not affect the provision of ‘Cloud’ services to the US government. However, providers will still need to ensure that US government data is not ‘Smeared’ across the provider’s global ‘Cloud’ infrastructure and is instead kept within data centers hosted in the US.
At face value these requirements might be seen as a complete barrier to the adoption of ‘Cloud’ by foreign governments. However, these challenges are balanced by a huge pressure to improve the efficiency of IT provision across the public sector. The extreme budgetary pressure being faced by many governments is forcing a re-evaluation of how IT services are delivered and at what cost. In this light the cost advantages of ‘Cloud’ (scale, elastic and automated provisioning, pay-as-you-go, reduced capital expenditure, consolidation of operations, etc.) are all highly attractive.
The only way to ‘Square the Circle’ is to offer a ‘Private Cloud’ solution i.e. a set of technologies which will let governments implement IT infrastructure which has ‘Cloud’ attributes but which can be kept separate from the ‘Public Cloud’ infrastructure and compliant with the country’s required policy, regulatory and security regimes.
The UK is a good example. The UK government is expecting all the advantages of ‘Cloud’ without the exposure that putting UK government data into the ‘Public Cloud’ infrastructure implies. At the end of the day ‘Cloud’ is a particular approach to systems and workload management that delivers the benefits I’ve outlined above. Whether these benefits are delivered within a private datacenter or across a public infrastructure is really immaterial.
From a competitive perspective it’s also important to understand why players like Amazon and Google want to play down the relevance of ‘Private Cloud’. The public sector is a very large and important IT market. In many countries the government is the single largest spender on IT. Vendors who only offer ‘Public Cloud’ services are failing to meet the most basic needs of public sector customers outside the US. If you want to play in the public sector market around the world you will need both a ‘Public’ and a ‘Private Cloud’ strategy.
It was interesting to see that having driven this set of requirements out of the public sector side of the business the concept of ‘Private Cloud’ started to find significant traction in the enterprise segment. There are plenty of large private sector companies who are not yet ready to move their sensitive data into the ‘Public Cloud’ and yet want the benefits of ‘Cloud’ workload management to drive efficiency in their IT service provision. In my view most large enterprise organizations will end up using a blend of both ‘Public’ and ‘Private’ cloud.
It has been fun being back in Beijing this week. My first visit to the city was in 1989, when the streets were still filled with bicycles. I’ve been back here about once a year since then, yet the pace of progress on so many levels is still hard to comprehend. Growth of the ‘Starbucks Index’ alone is mind blowing and seems to be reaching Seattle or New York densities. The other thing I’ve noticed is the incredible focus on efficiency; from the three-hour turn-around for a visa at the Chinese Consulate in Zurich (Try that at the US Embassy), to the immigration process and flow through the airport. Yes, the traffic is still a nightmare and seems to be getting worse. I don’t live here, so I imagine there is an alternate reality I’m not aware of. However, the general experience for visitors is pretty impressive.
I had a chance to catch up with Microsoft’s National Technology Officer Sean Zhang on Wednesday for a great evening of conversation. I’ll also be able to see Peter Moore and Michael Thatcher before I leave so that’s an added bonus.
The reason for the visit to Beijing was an invitation to present at the Chinese Academy of Governance today. The Academy is responsible for training senior civil servants in the PRC administration. My presentation is titled “IT Platforms and the Ecology of Innovation” (PDF copy here) and focuses on how the evolution of IT platforms has enabled the development of the global service economy. I’ll take a historic look at how these platforms have evolved and then talk about the coming disruptive effects of the ‘Cloud’ platform and those which follow.
This presentation comes out of ongoing work with Prof. John Zysman and the team at UC Berkeley’s BRIE. One question which is central, but still the subject of heated debate, is how you define an IT platform, i.e. rigorously enough for it to be used as an analytical definition when looking at the surrounding political economy issues. The current definition I’m using is:
A consistent development environment supported by new software and hardware architectures, based on standards and available at scale, that enables service and business model innovation
This is not perfect and I’m open to suggestions about how to improve it. Any definition needs to be able to delineate historic platform transitions in a clear and defensible way and also be able to help identify when a new transition is taking place.
I’m looking forward to an interesting discussion and debate about the opportunities and policy challenges these new platforms will create in the Chinese context. Should be fun.