
Minsky crisis


L. Randall Wray
From The New Palgrave Dictionary of Economics, Online Edition, 2011
Edited by Steven N. Durlauf and Lawrence E. Blume

This entry examines the approach of Hyman P. Minsky to financial crisis. Minsky famously developed an ‘investment theory of the cycle and a financial theory of investment’. His thesis was that, over the course of the cycle, behaviour changes in such a way that financial fragility develops. This makes a financial crisis more likely. When the global financial crisis hit in 2008, many commentators returned to the theories of Minsky, calling it a ‘Minsky crisis’ or a ‘Minsky moment’. This entry agrees that Minsky deserves credit for identifying the processes that led up to the crisis. However, it is not sufficient to narrowly constrain the analysis to the transition that occurred over the past decade or so. Beginning in the 1980s and through to his death in 1996, Minsky had been arguing that a new form of capitalism had appeared, which he called ‘money manager capitalism’. In important respects it reproduced the conditions that Hilferding had called ‘finance capitalism’ in the early 20th century – a form of capitalism that collapsed into the Great Depression. What Minsky was arguing was that an extremely unstable form of capitalism had emerged – one based on what is often called financialisation of the economy. He (rightly) feared that it would ultimately lead to a great crash. The rest of the entry looks at Minsky's proposals for reforms that would help to promote stability. Yet, as Minsky always said, stability is destabilising.

Keywords: financial instability hypothesis; global financial crisis; Hyman Minsky; money manager capitalism; self-regulating markets; stability is destabilizing


Stability is destabilizing. Those three words capture in a concise manner the insight that underlies Minsky’s analysis of the transformation of the economy over the entire post-war period. The basic thesis is that the dynamic forces of the capitalist economy are explosive so that they must be contained by institutional ceilings and floors – part of the ‘safety net’. However, to the extent that the constraints successfully achieve some semblance of stability, that will change behaviour in such a manner that the ceiling will be breached in an unsustainable speculative euphoria. If the inevitable crash is cushioned by the institutional floors, the risky behaviour that caused the boom will be rewarded. Another boom will build, and its crash will again test the safety net. Over time, the crises become increasingly frequent and severe until finally ‘it’ (a great depression with a debt deflation) becomes possible.
While Minsky’s ‘financial instability hypothesis’ is fundamentally pessimistic, it is not meant to be fatalistic (Minsky, 1975, 1982, 1986). According to Minsky, policy must adapt as the economy is transformed. The problem with the stabilizing institutions that had been put in place in the early post-war period is that they no longer served the economy well by the 1980s, as they had not kept up with the evolution of financial institutions and practices. Further, they had been purposely degraded and even in some cases dismantled, often on the erroneous belief that ‘free’ markets are self-regulating. Indeed, that became the clarion call of most of the economics profession after the early 1970s, based on the rise of ‘new’ classical economics with its rational agents and instantaneously clearing markets and the ‘efficient markets hypothesis’ that proclaimed prices fully reflect all information about ‘fundamentals’. Hence, not only had firms learned how to circumvent regulations and other constraints, but policymakers had removed regulations and substituted ‘self-regulation’ in place of government oversight.
From his earliest writings in the late 1950s to his final papers written before his death in 1996, Minsky always analyzed the financial innovations of profit-seeking firms that were designed to subvert New Deal constraints. For example, he was one of the first economists to recognize how the development of the federal funds market had already reduced the Fed’s ability to use reserves to constrain bank lending, while at the same time ‘stretching’ liquidity because banks would have fewer safe and liquid assets should they need to unwind balance sheets (Minsky, 1957). And much later, in a remarkably prescient piece in 1987, Minsky had foreseen the development of securitization (to move interest rate risk off bank balance sheets while reducing capital requirements) that would later be behind the global financial crash of 2007 (published as Minsky, 2008). At the same time, Minsky continually formulated and advocated policy to deal with these new developments. Unfortunately, his warnings were largely ignored by the profession and by policymakers – until it was too late.
Minsky’s theory of the business cycle

In the introduction I focused on long-term transformations because too often Minsky’s analysis is interpreted as a theory of the business cycle. There have even been some analyses that attempted to ‘prove’ Minsky wrong by applying his theory to data from one business cycle. Further, the global crisis that began in 2007 has been called the ‘Minsky moment’ or a ‘Minsky crisis’. As I will discuss, I agree that this crisis does fit with Minsky’s theory, but I object to analyses that begin with, say, 2004 – attributing the causes of the crisis to changes that occurred over a handful of years that preceded the collapse. Rather, I argue that we should find the causes of the crisis in the transformation that began in 1951. We will not understand the crisis if we begin with a US real estate boom fueled by lending to subprime borrowers. That will be the topic of the next section.
Now, Minsky did have a theory of the business cycle (see Papadimitriou and Wray (1998) for a summary of Minsky’s approach). He called it ‘an investment theory of the cycle and a financial theory of investment’. He borrowed the first part of that from Keynes: investment is unstable and tends to be the driver of the cycle (through its multiplier impact). Minsky’s contribution was the financial theory of investment, with his book John Maynard Keynes (1975) providing the detailed exposition. In brief, investment is financed with a combination of internal and external (borrowed) funds. Over an expansion, success generates a greater willingness to borrow, which commits a rising portion of expected gross profits (Minsky called it gross capital income) to servicing debt. This exposes the firm to greater risk because if income flows turn out to be less than expected, or if finance costs rise, firms might not be able to meet those debt payment commitments. There is nothing inevitable about that, however, because Minsky incorporated the profits equation of Michal Kalecki in his analysis: at the aggregate level total profits equal investment plus the government’s deficit plus net exports plus consumption out of profits and less saving out of wages (Minsky, 1986). The important point is that all else being equal, higher investment generates higher profits at the aggregate level. This can actually make the system even more unstable, because if profits continually exceed expectations, making it easy to service debt, then firms will borrow even more.
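The Kalecki profits identity that Minsky embedded in his analysis can be written compactly as follows (the symbols here are shorthand of my own, not Minsky’s notation): with Π for aggregate gross profits, I for investment, (G − T) for the government deficit, (X − M) for net exports, C_Π for consumption out of profits and S_W for saving out of wages,

```latex
\Pi = I + (G - T) + (X - M) + C_{\Pi} - S_{W}
```

Reading the identity term by term makes the argument in the text concrete: holding the other terms constant, a rise in investment raises aggregate profits one-for-one, which is why an investment boom can validate the very borrowing that financed it.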
This then leads to Minsky’s famous categorization of financial positions: a hedge unit can meet payment commitments out of income flow; a speculative unit can only pay interest but must roll over principal; and a Ponzi unit cannot even make the interest payments so must ‘capitalize’ them (borrowing to pay interest). (In his classification of ‘Ponzi finance’, Minsky borrowed the name of a famous fraudster, Charles Ponzi, who ran a ‘pyramid’ scheme – in more recent times, Bernie Madoff ran another pyramid that failed spectacularly.) Over a ‘run of good times’, firms (and households) are encouraged to move from hedge to speculative finance, and the economy as a whole transitions from one in which hedge finance dominates to one with a greater weight of speculative finance. Eventually some important units find they cannot pay interest, driving them to Ponzi finance. Honest bankers do not like to lend to Ponzi units because their outstanding debt grows continually unless income flows eventually rise. When the bank stops lending, the Ponzi unit collapses. Following Irving Fisher, Minsky then described a ‘debt deflation’ process: collapse by one borrower can bring down his creditors, who default on their own debts, generating a snowball of defaults. Uncertainty and pessimism rise, investment collapses and through the multiplier income and consumption also fall, and we are on our way to a recession.
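Minsky’s taxonomy of financing positions can be sketched as a simple decision rule. The function below is an illustrative toy, not anything from Minsky’s own work: it compares a unit’s expected cash flow with its contractual interest and principal payments, using the definitions given in the paragraph above.

```python
def classify(cash_flow, interest_due, principal_due):
    """Classify a unit's financing position in Minsky's taxonomy.

    cash_flow: expected income available for debt service
    interest_due, principal_due: contractual payment commitments
    (all names are illustrative shorthand, not Minsky's notation)
    """
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # meets all commitments out of income
    elif cash_flow >= interest_due:
        return "speculative"  # covers interest, must roll over principal
    else:
        return "ponzi"        # cannot even pay interest; must capitalize it
```

A ‘run of good times’ can then be pictured as cash-flow expectations rising while payment commitments rise even faster, shifting units down the ladder from hedge toward Ponzi positions.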
But Minsky did not mean to imply that all financial crises lead to recessions, nor that all recessions result from the transition to speculative and Ponzi finance. The Federal government in the post-war period was big – 20–25% of the economy versus only 3% on the verge of the Great Depression. This meant that government itself could be both stabilizing and destabilizing. Countercyclical movement of its budget from surplus in a boom to deficit in a slump would stabilize income and profits (recall from the Kalecki accounting identity above that government deficits add to profits). A rising deficit could potentially offset the effects of falling investment, and, indeed, over the post-war period that helped to cushion every recession. However, it is also possible for the government to cause a downturn by cutting spending – as it did in the demobilization from the Second World War. And if the budget is excessively biased toward surplus when the economy grows, it will generate ‘fiscal drag’ that removes household income and profits of firms – causing a recession. For that reason, a recession could occur well before the private sector is dominated by speculative and Ponzi positions. (Note that an economy that moves toward current account deficits when it grows robustly – such as the USA – will suffer an additional ‘headwind’ that sucks income and profits from domestic households and firms.)
In addition to the ‘big government’, the post-war period also had what Minsky called the ‘big bank’ – the Federal Reserve. The Fed plays a number of roles: it sets interest rates, it regulates and supervises banks, and it acts as lender of last resort. Generally, it moves interest rates in a procyclical manner (raising them in expansion and lowering them in recession), which is believed by many orthodox economists to be stabilizing. Like many heterodox economists, Minsky doubted that spending is very interest-sensitive: in a boom, raising rates by a moderate amount will not curb enthusiasm, and in a bust, even very low interest rates cannot overcome pessimism. In addition, Minsky emphasized the impact of interest rates on financial fragility: raising rates in a boom would increase finance costs and hasten the transition to speculative and Ponzi financial positions, hence, to the extent that tight monetary policy ‘works’, it does so by inducing a financial crisis. Thus, Minsky rejected the notion that the Fed can use interest rates to ‘fine tune’ the economy.
But lender of last resort policy was viewed by Minsky as essential – it would stop a bank run and would help to put a floor to asset prices, attenuating the debt deflation process discussed above. If the Fed lends to a troubled financial institution, it does not have to sell assets to try to cover demands by creditors for redemption. For example, if depositors are demanding cash withdrawal, in the absence of a lender of last resort the bank would have to sell assets to raise the cash required; this is normally difficult for assets such as loans, and nearly impossible to do in a crisis. So the Fed lends the reserves to cover withdrawals.
In sum, the intervention of the big bank and the big government helps to prevent a financial crisis from turning into a deep downturn. The big government’s deficit puts a floor to falling income and profits, and the big bank’s lending relieves pressure in financial markets (Minsky, 1986). A financial crisis can even occur without setting off a recession – a good example was the 1987 stock market crash, in which the Fed quickly intervened with the promise that it would lend reserves to market participants to stop necessitous selling of stocks to cover positions. No recession followed the crash – unlike the October 1929 crash, in which margin calls forced sales of stocks. And the big government deficits kept profits flowing in 1987, again unlike 1929 when the government’s budget was far too small to make up for collapsing investment.
Unfortunately, most Fed policy over the post-war period involved reducing regulation and supervision, promoting the natural transition to financial fragility. From Minsky’s perspective, this was a dangerous combination. While the big bank and the big government reduced the fall-out of crisis, the move to ‘self-regulation’ by financial institutions and markets made riskier behaviour possible. As the fear of failure was attenuated by a government safety net, perceived risk was lowered. Ben Bernanke (2004), then a Fed governor and later chairman, proclaimed the onset of ‘the great moderation’ – a new era of stability. As Minsky argued, though, ‘stability is destabilizing’. In his view, if the government is going to provide a safety net to prop up and ‘validate’ risky behaviour, then the other side of the coin must be greater oversight and regulation, not less. With rapid financial innovation, reduced regulatory oversight, and less fear of a debt deflation process, financial fragility would build until a collapse.
Money manager capitalism and the crisis

Beginning in 2007, the world faced the worst economic crisis since the 1930s. References to Keynesian theory and policy became commonplace, with only truly committed free marketeers arguing against massive government spending to cushion the collapse and re-regulation to prevent future crises. All sorts of explanations were proffered for the causes of the crisis: lax regulation and oversight, rising inequality that encouraged households to borrow to support spending, greed and irrational exuberance, and excessive global liquidity – spurred by easy money policy in the USA and by US current account deficits that flooded the world with too many dollars. While each of these explanations does capture some aspect of the crisis, none of them fully recognizes the systemic nature of the global crisis.
Unfortunately, Minsky died in 1996, but after the crash his work enjoyed unprecedented interest, with many calling this the ‘Minsky moment’ or ‘Minsky crisis’ (Cassidy, 2008; Chancellor, 2007; McCulley, 2007; Whalen, 2007). I argued above that we should not view this as a ‘moment’ that can be traced to recent developments. Rather, as Minsky had been arguing for nearly fifty years, what we have seen is a slow transformation of the global financial system toward what Minsky called ‘money manager capitalism’ that finally collapsed in 2007. Hence I call it the ‘Minsky half-century’ (Wray, 2009).
It is essential to recognize that we have had a long series of crises in the USA and abroad, and the trend has been toward more severe and more frequent crises: muni bonds in the mid-1960s; real estate investment trusts in the early 1970s; developing country debt in the early 1980s; commercial real estate, junk bonds and the thrift crisis in the USA (with banking crises in many other nations) in the 1980s; stock market crashes in 1987 and again in 2000 with the dot-com bust; the Japanese meltdown from the early 1990s; Long Term Capital Management, the Russian default and Asian debt crises in the late 1990s; and so on. Until the current crisis, each of these was resolved (some more painfully than others – impacts were particularly severe and long-lasting in the developing world) with some combination of central bank or international institution (IMF, World Bank) intervention plus a fiscal rescue (often taking the form of US Treasury spending of last resort to prop up the US economy to maintain imports that helped to restore rest of world growth).
According to Minsky, the problem is money manager capitalism – the economic system characterized by highly leveraged funds seeking maximum returns in an environment that systematically under-prices risk (Wray, 2009). There are a number of reasons for this. For example, there was the belief in the Greenspan ‘put’ (the Chairman would always intervene to bail out financial markets if problems developed) and the Bernanke ‘great moderation’ – both of which lowered perceived risk. Since the last depression and debt deflation had occurred so long ago, few market participants had any memory of it; indeed, many of those in markets did not even remember the savings and loan crisis of the 1980s! Many of the models that were used to price assets were based on a very short time horizon (five years or less; sometimes this was necessitated by the fact that the financial instruments did not exist prior to that), a period that was unusually quiescent. Further, the rise of ‘shadow banks’ (financial institutions that often had lower costs and less regulation) led to a competitive reduction of risk spreads (pushing interest rates on riskier assets down relative to those on safe assets). Credit rating agencies played an important role, providing high ratings to assets that proved to be very much riskier than indicated. All of this was made worse by a general ‘euphoric’ belief that prices of assets (such as real estate and commodities) could only go up. Finally, there was an explosion of various types of derivatives that appeared to reduce risk by shifting it to institutions better able to absorb losses. Perhaps the best example was the use of credit default swaps that were used as insurance in case of default; but when the crisis began, it turned out that all the risk came back in the form of counterparty risk (AIG, the seller of the ‘insurance’, could not cover the losses).
While we cannot go into all the details here, it was even worse than that because credit default swaps were also used as pure bets on failure (the bettor would win if the assets went bad), and prices of these instruments were used as indicators of the probability of default (rising credit default swap prices could induce credit raters to lower ratings, which then triggered pay-offs on the bets even as they raised borrowing costs for the debtors) (see Wray, 2009).
In sum, contrary to efficient markets theory, markets generate perverse incentives for excess risk, punishing the timid with low returns (Cassidy, 2009). Any money manager who tried to swim against the stream by avoiding excessive leverage and complex and hard-to-value assets found it hard to retain clients. Those playing along were rewarded with high returns because highly leveraged funding drives up prices for the underlying assets – whether they are dot-com stocks, Las Vegas homes, or corn futures. It all works – until it doesn’t. We now know from internal emails that many financial market participants knew that risk was under-priced, but adopted an ‘I’ll be gone, you’ll be gone’ strategy – take the risk, get the millions of dollars in compensation now, and retire when the whole thing collapses.
Many have accurately described the phenomenon as ‘financialization’ – growing debt that leverages income flows and wealth. At the 2007 peak, total debt in the US reached a record 5 times GDP (versus 3 times GDP in 1929), with most of that private debt of households and firms. From 1996 until 2007 the US private sector spent more than its income (running deficits that increased debt) every year except during the recession that followed the dot-com bust in 2000. Financial institution debt also grew spectacularly over the two decades preceding the crisis, totaling more than GDP. Exotic financial instruments exploded – outstanding credit default swaps (bets on default by households, firms, and even countries) reached over $60 trillion, and total financial derivatives (including interest rate swaps, and exchange rate swaps) reached perhaps $600 trillion – many times world GDP.
Some accounts blame subprime mortgages (home loans made to riskier borrowers, typically low income households) for the global financial collapse – but that is too simple. The total value of riskier mortgage loans made in the USA during the real estate boom could not have totalled more than a trillion or two dollars (big numbers but small relative to the total volume of financial instruments). The USA was not the only country that experienced a speculative boom in real estate – Ireland, Spain and some countries in eastern Europe also had them. There was also speculation in commodities markets – leading to the biggest boom in history, followed by the inevitable crash – that involved about a half trillion dollars of managed money (mostly US pension funds) placing bets in commodities futures markets (Wray, 2008). Global stock markets also enjoyed a renewed speculative hysteria. Big banks like Goldman Sachs speculated against US state governments, as well as countries like Greece. (For example, Goldman Sachs encouraged clients to bet against the debt issued by at least 11 US states – while collecting fees from those states for helping them to place debt. A common technique was to pool risky debt into securities, sell these to investors, then ‘short’ the securities using credit default swaps to bet on failure. The demand for CDSs for shorting purposes would lead to credit downgrades that raised finance costs and hastened default. The most famous shorter of mortgage debt is John Paulson, whose hedge fund asked Goldman Sachs to create toxic synthetic collateralized debt obligations (CDOs) that it could bet against. According to the US Securities and Exchange Commission, Goldman allowed Paulson’s firm to increase the probability of success by picking particularly risky MBSs to include in the CDOs. Goldman arranged a total of 25 such deals, named Abacus, totalling about $11 billion. Out of 500 CDOs analyzed by UBS, only two did worse than Goldman’s Abacus.
Just how toxic were these CDOs? Only five months after creating one of these Abacus CDOs, the ratings of 84% of the underlying mortgages had been downgraded. By betting against them, Goldman and Paulson won – Paulson pocketed $1 billion on the Abacus deals (he made a total of $5.7 billion shorting mortgage-based instruments in a span of two years) and Goldman earned fees for arranging the deals. According to the SEC Goldman’s customers actually met with Paulson as the deals were assembled – but Goldman never informed them that Paulson was the shorter of the CDOs they were buying!)
On top of all this speculative fervor there was also fraud – which appears to have become normal business practice in all of the big financial institutions. It will be years, perhaps decades, before we will unravel all of the contributing factors, including the financial instruments and practices as well as the questionable activities by market players and government officials that led to the collapse. (The Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States (commissioned by the US Congress and President Obama) concluded that the crisis was both foreseeable and preventable. It blamed the ‘captains of finance’ (heads of the biggest banks) and the ‘public stewards’ (officials charged with regulating the banks) for the systemic breakdown in accountability and ethics that led to the crisis. Former bank regulator William Black (who blew the whistle on Charles Keating, the convicted felon who ran Lincoln Savings, the biggest thrift to fail as a result of the 1980s crisis, and the patron of five US Senators known as the ‘Keating Five’) is more blunt: the biggest banks in America were run as ‘control frauds’ designed to enrich top management while defrauding customers and shareholders. By his reckoning, thousands of individuals committed go-to-jail fraud. Only time will tell whether they will be brought to justice.)
This much we do know: the entire financial system had evolved in a manner that made ‘it’ – an economic collapse and debt deflation – possible. Riskier practices had been permitted by regulators, and encouraged by rewards and incentives. Lack of oversight and prosecution led to a dramatic failure of corporate governance and risk management at most big institutions (see the Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States). The combination of big government and big bank interventions plus bail-outs of ‘too big to fail’ institutions in crisis after crisis since the 1960s let risk grow on trend. The absence of depressions allowed financial wealth to grow over the entire post-war period – including personal savings and pension funds. All of these funds needed to earn returns. As a result, the financial sector grew relative to GDP – as a percentage of value added, it grew from 10% to 20%, and its share of corporate profits quadrupled from about 10% to 40% from 1960 to 2007 (Nersisyan and Wray, 2010). It simply became too large relative to the size of the economy’s production and income. The crash was the market’s attempt to downsize finance – just as the crash in 1929 permanently reduced the role played by finance, and allowed for the robust growth of the post-war period.

Beginning in summer 2007, a series of runs on financial institutions began that would have snowballed without unprecedented intervention by governments around the world. Typically these took the form of a refusal by markets to ‘refinance’ banks. Recall from above that debt of financial institutions had grown tremendously, as they borrowed mostly short-term to finance positions in financial assets. Often this took the form of overnight borrowing plus very short-term commercial paper on the basis of high-quality collateral.
As the crisis unfolded, borrowers had to pledge more and more collateral, and pay higher and higher interest rates to borrow. By fall of 2007, the ‘haircut’ (a 10% haircut means the bank can borrow 90 cents against each dollar of good collateral) was so large that many financial institutions could no longer borrow enough to finance their positions in assets – meaning they had to sell assets into a market that now feared risk. Such ‘fire sales’ would lead to what Irving Fisher and Minsky called a ‘debt deflation’. At the same time, worried shareholders began to dump bank stocks. Without prompt rescue by governments, the ‘market’ would have operated in a manner that would have led to failure of most institutions. US Treasury Secretary Timothy Geithner later said that ‘none of [the biggest banks] would have survived a situation in which we had let that fire try to burn itself out’ and Fed Chairman Ben Bernanke said ‘As a scholar of the Great Depression, I honestly believe that September and October of 2008 was the worst financial crisis in global history… out of maybe the 13, 13 of the most important financial institutions in the United States, 12 were at risk of failure within a period of a week or two’ (Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States, p. 354).
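The haircut arithmetic in the passage above can be made explicit with a small sketch (the function names are illustrative, not standard market terminology): a haircut of h means a borrower can raise only (1 − h) dollars of cash per dollar of collateral, so the maximum leverage that repo-style funding can support is 1/h, and rising haircuts force deleveraging.

```python
def borrowing_capacity(collateral_value, haircut):
    """Cash that can be raised against collateral under a given haircut.

    haircut is a fraction, e.g. 0.10 for a 10% haircut: the lender
    advances only 90 cents per dollar of collateral pledged.
    """
    return collateral_value * (1.0 - haircut)

def max_leverage(haircut):
    """Maximum assets-to-equity ratio supportable by collateralized
    borrowing alone: equity must cover the haircut on every dollar
    of assets, so leverage is capped at 1 / haircut."""
    return 1.0 / haircut
```

For example, at a 10% haircut a dollar of collateral raises 90 cents and leverage is capped at 10; if the haircut jumps to 50%, the same collateral raises only 50 cents and the leverage cap falls to 2, which is why rising haircuts forced the fire sales described above.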
It is important to include as contributing factors the erosion of New Deal institutions that had enhanced economic stability, including most importantly the creation of a high-consumption, high-employment and high-wage society. As Minsky (1986, 1996) argued, the USA emerged from the Second World War with powerful labour unions that were able to obtain good and growing wages, which fueled growth of domestic consumption out of income. According to Minsky, debt loads were extremely low in the private sector – with debts having been paid down or wiped out by bankruptcy in the Great Depression – and with lots of safe government bonds held as assets. In combination with a strengthened government safety net (Social Security for the aged, welfare and unemployment compensation for those without jobs, the GI bill for soldiers returning home, low interest rate loans for students) this meant that consumption comprised a relatively larger part of GDP. For Minsky, consumption out of income is a very stable component – unlike investment, which is unstable. Minsky argued that investment-led growth is more unstable than growth led by a combination of consumption out of income plus government spending because the second model does not lead to worsening private sector balance sheets.
However, over the course of the past four decades, union power declined. Minsky frequently claimed that the most significant action taken during the Reagan administration was the busting of the air traffic controllers’ union (which, he claimed, sent a message to all of labour). Median real wages stopped growing, consumer debt grew on trend (and then exploded after 1995), and the generosity of the safety net was reduced. Further, over the whole period, policy increasingly favoured investment and saving over consumption – with favourable tax treatment of savings and investment, and with public subsidies of business investment. Federal government also stopped growing (relative to the size of the economy) and its spending shifted away from public infrastructure investment. Inequality grew on trend, so that it actually surpassed the 1929 record inequality. President Bush even celebrated the creation of the ‘ownership society’ – ironically, with concentration of ownership of financial assets at the very top (Wray, 2005). The only asset that was widely owned was the home, which then became the basis for a speculative real estate bubble that produced financial assets traded around the world. The global financial collapse and deep recession in the USA after 2007 then generated widespread foreclosures (13 million by 2012) – with families kicked out of their homes, owing lots of debt, and with real estate prices collapsing so that vulture hedge funds could buy up blocks of houses at pennies on the dollar. By 2010 the home ownership rate in the USA had returned to the pre-boom level.
The 1929 crash ended what Minsky and Rudolf Hilferding designated the finance capitalism stage (Wray, 2009). Perhaps the global financial crisis of 2007 will prove to be the end of this stage of capitalism – the money manager phase. Of course, it is too early to even speculate on the form capitalism will take in the future. In the final section I will look at the policy response that could help to reformulate global capitalism along Minskian lines.
Minskian policy in the aftermath of the collapse of money manager capitalism

Minsky (1986) argued that the Great Depression represented a failure of the small-government, laissez faire economic model, while the New Deal promoted a Big Government/Big Bank highly successful model for financial capitalism. Following Minsky, we might say that the current crisis represents a failure of the Big Government/Neoconservative (or, outside the USA, what is called neo-liberal) model that promotes deregulation, reduced supervision and oversight, privatization, and consolidation of market power. It replaced the New Deal reforms with self-supervision of markets, with greater reliance on ‘personal responsibility’ as safety nets were reduced, and with monetary and fiscal policy that is biased against maintenance of full employment and adequate growth to generate rising living standards for most Americans. Even before the crisis, the USA faced record inequality, a healthcare crisis, and high rates of incarceration, among other problems facing the lower and middle classes (Wray, 2000, 2005). All of these trends are important as they increase insecurity and the potential for instability, as Minsky described in one of his last published pieces (Minsky, 1996).
We must return to a more sensible model, with enhanced oversight of financial institutions and with a financial structure that promotes stability rather than speculation. We need policy that promotes rising wages for the bottom half so that borrowing is less necessary to achieve middle class living standards. We need policy that promotes employment, rather than transfer payments – or worse, incarceration – for those left behind. Monetary policy must be turned away from using rate hikes to pre-empt inflation and toward a proper role: stabilizing interest rates, direct credit controls on bank lending to prevent runaway speculation, and stronger bank supervision. (A central bank could, for example, increase margin requirements on lending to speculators, raise required down payments for bank real estate lending, and set limits on bank lending for specified purposes in a euphoric boom.)
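The countercyclical credit controls sketched above can be made concrete with a toy rule. Here is a minimal Python sketch of one such rule, linking a required down-payment fraction to a local price-to-rent ratio; the function name, thresholds, and parameters are illustrative assumptions, not a proposal from Minsky or from this entry:

```python
def required_down_payment(price_to_rent, baseline=0.20, normal_ratio=15.0, sensitivity=0.02):
    """Toy countercyclical lending rule (hypothetical): raise the minimum
    down-payment fraction as the local price-to-rent ratio climbs above its
    assumed long-run norm; never fall below the baseline, never exceed 100%."""
    froth = max(0.0, price_to_rent - normal_ratio)
    return min(1.0, baseline + sensitivity * froth)

print(required_down_payment(15.0))  # calm market: the 20% baseline applies
print(required_down_payment(25.0))  # frothy market: the requirement tightens to 40%
```

The point of the sketch is only the shape of the rule: lending standards tighten automatically in a euphoric boom instead of being relaxed by competing lenders.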
Minsky insisted that ‘the creation of new economic institutions which constrain the impact of uncertainty is necessary’, arguing that the ‘aim of policy is to assure that the economic prerequisites for sustaining the civil and civilized standards of an open liberal society exist. If amplified uncertainty and extremes in income maldistribution and social inequalities attenuate the economic underpinnings of democracy, then the market behavior that creates these conditions has to be constrained’ (Minsky, 1996, pp. 14, 15). It is time to take finance back from the clutches of Wall Street’s casino.
Minsky had long called for an ‘employer of last resort’ program to provide jobs to those unable to find them in the private sector. In a sense this would be a counterpart to the central bank’s ‘lender of last resort’ program. In the jobs program, government would offer a perfectly elastic supply of jobs at a basic program wage. Anyone willing to work at that wage would be guaranteed a job. Workers would be ‘taken as they are’ – whatever their level of education or training – and jobs would be designed for their skill level. Training would be a part of every job – to improve skills and to make workers more employable outside the program. The work would provide useful services and public infrastructure, improving living standards. While Minsky is best known for his work on financial instability, his proposal for the employer of last resort program received almost as much of his attention, especially in the 1960s and 1970s. Interested readers are referred to the growing body of work on use of job guarantee programs as part of long-term development strategy (Bhaduri, 2005; Felipe et al., 2009; Hirway, 2006; Minsky, 1965; Mitchell and Wray, 2005; Tcherneva and Wray, 2007; Wray, 2007). Note that this would help to achieve Minsky’s goal of a high-employment economy with decent wages to finance consumption. Minsky always saw the job guarantee as a stabilizing force – and not something that is desirable only for humanitarian reasons.
The global crisis offers both grave risks and opportunities. Global employment and output collapsed faster than at any time since the Great Depression. Hunger and violence grew after the financial crisis – even in developed nations. The 1930s offer examples of possible responses: on the one hand, nationalism and repression (Nazi Germany); on the other, a New Deal and progressive policy. From a Minskian perspective, finance played an outsized role in the run-up to the crisis, both in the developed nations, where policy promoted managed money, and in the developing nations, which were encouraged to open themselves to international capital. Households and firms in developed nations were buried under mountains of debt even as incomes for wage earners stagnated. Developing nations were similarly swamped with external debt service commitments, while the promised benefits of neo-liberal policies often never arrived.
Minsky would probably argue that it is time to put global finance back in its proper place as a tool for achieving sustainable development, much as the USA did in the aftermath of the Great Depression. This means substantial downsizing and careful re-regulation. Government must play a bigger role, which in turn requires a new economic paradigm that recognizes the possibility of simultaneously achieving social justice, full employment, and price and currency stability through appropriate policy.
See Also

banking crisis;
credit crunch chronology;
European Central Bank and monetary policy in the Euro area;
euro zone crisis 2010

Bibliography

Bernanke, B. S. 2004. The Great Moderation. Speech given at the meetings of the Eastern Economics Association, Washington DC, 20 February. Available at http://www.federalreserve.gov/Boarddocs/Speeches/2004/20040220/default/htm. Accessed 12 May 2009.

Bhaduri, A. 2005. Development With Dignity: A Case for Full Employment. National Book Trust, India.

Cassidy, J. 2008. The Minsky moment. The New Yorker, 4 February. http://www.newyorker.com/. Accessed 29 January 2008.

Cassidy, J. 2009. How Markets Fail: The Logic of Economic Calamities. Picador, New York.

Chancellor, E. 2007. Ponzi Nation. Institutional Investor, 7 February.

Felipe, J., Mitchell, W. and Wray, L. R. 2009. A reinterpretation of Pakistan’s ‘economic crisis’ and options for policymakers. Manuscript, Asian Development Bank.

Hirway, I. 2006. Enhancing livelihood security through the National Employment Guarantee Act: toward effective implementation of the Act. Levy Economics Institute Working Paper No. 437. http://www.levy.org/.

McCulley, P. 2007. The plankton theory meets Minsky. Global Central Bank Focus, March. PIMCO Bonds: http://www.pimco.com/LeftNav/Featured+Market+Commentary/FF/1999-2001/FF_01_2001.htm. Accessed 8 March 2007.

Minsky, H. P. 1957. Central banking and money market changes. Quarterly Journal of Economics, 71(2), 171.

Minsky, H. P. 1965. The role of employment policy. In: Poverty in America (ed. M. S. Gordon). Chandler Publishing Company, San Francisco, CA.

Minsky, H. P. 1975. John Maynard Keynes. Columbia University Press, New York.

Minsky, H. P. 1982. Can ‘It’ Happen Again? M. E. Sharpe, Armonk, NY.

Minsky, H. P. 1986. Stabilizing an Unstable Economy. Yale University Press, New Haven and London.

Minsky, H. P. 1996. Uncertainty and the Institutional Structure of Capitalist Economies. Levy Economics Institute of Bard College, Working Paper No. 155.

Minsky, H. P. 2008 (1987). Securitization. Levy Economics Institute of Bard College, Policy Note No. 2, 12 May.

Mitchell, W. F. and Wray, L. R. 2005. In defense of employer of last resort: a response to Malcolm Sawyer. Journal of Economic Issues, 39(1), 235–245.

Nersisyan, Y. and Wray, L. R. 2010. Transformation of the financial system: financialization, concentration, and the shift to shadow banking. In: Minsky, Crisis and Development (eds. D. Tavasci and J. Toporowski), pp. 32–49. Palgrave Macmillan, Basingstoke.

Papadimitriou, D. B. and Wray, L. R. 1998. The economic contributions of Hyman Minsky: varieties of capitalism and institutional reform. Review of Political Economy, 10(2), 199–225.

Tcherneva, P. R. and Wray, L. R. 2007. Public employment and women: the impact of Argentina’s Jefes Program on female heads of poor households. Levy Economics Institute Working Paper No. 519. http://www.levyinstitute.org/publications/?docid=965.

Whalen, C. 2007. The U.S. credit crunch of 2007: a Minsky moment. Levy Economics Institute Public Policy Brief No. 92. http://www.levy.org/.

Wray, L. R. 2000. A new economic reality: penal Keynesianism. Challenge, September–October, 31–59.

Wray, L. R. 2005. The ownership society: social security is only the beginning. Levy Economics Institute Public Policy Brief No. 82. http://www.levy.org/.

Wray, L. R. 2007. The employer of last resort programme: could it work for developing countries? Economic and Labour Market Papers, 2007/5, International Labour Office, Geneva.

Wray, L. R. 2008. The commodities market bubble: money manager capitalism and the financialization of commodities. Levy Economics Institute Public Policy Brief No. 96. http://www.levy.org/.

Wray, L. R. 2009. The rise and fall of money manager capitalism: a Minskian approach. Cambridge Journal of Economics, 33(4), 807–828.

How to cite this article

Randall Wray, L. "Minsky crisis." The New Palgrave Dictionary of Economics. Online Edition. Eds. Steven N. Durlauf and Lawrence E. Blume. Palgrave Macmillan, 2011. The New Palgrave Dictionary of Economics Online. Palgrave Macmillan. 23 July 2011 doi:10.1057/9780230226203.3852

Monday, April 18, 2011

Social mobility

Rich are getting richer? Exactly right
Article by: JAY COGGINS Updated: March 29, 2011 - 2:32 PM

There's a noisy band of American inequality deniers who are trying to convince us otherwise.


Steven Cunningham ("The rich are getting richer -- right?" March 25) tells us that the rich in America aren't getting richer.

To paraphrase Artemus Ward, a 19th-century humorist: It ain't so much the things he don't know that get him into trouble. It's the things he does know that just ain't so.

Economists, a famously contentious bunch, disagree about many things. On the question of economic inequality, though, they disagree hardly at all: American inequality is high and rising.

Economists use three main tools to study inequality. They measure poverty. They compute the Gini coefficient. And they compare the income or wealth of the rich (or the very rich) to that of the rest of us.

On all of these counts the U.S. record since 1970 is grim for all but those at the top.

The Census Bureau's 2009 poverty threshold for a family of two adults with two children was $21,756; for a single adult aged less than 65, it was $11,161.

The poverty rate, giving the percentage of Americans living below this threshold, varies over time as the economy waxes and wanes.

Lately it's been rising. In 2009, 43 million Americans, one of every seven (14.3 percent), lived in poverty. That's up from 25.5 million (12.6 percent) in 1970.

The Gini coefficient measures inequality for all of us, not just the poor. It can be zero (if income is distributed equally), 100 (if, impossibly, a single family captures the entire national income), or anything in between.

A higher Gini means more inequality.

The Census Bureau tells us the U.S. Gini has risen from 39.4 in 1970 to 46.2 in 2000, and to 46.8 in 2009.

Government programs and taxes can and do reduce inequality, though. After accounting for their effects, the U.S. Gini coefficient falls to 38.
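The Gini figures quoted above can be reproduced for toy cases. A minimal Python sketch of the standard mean-absolute-difference formula for the coefficient follows; the function name and the sample incomes are illustrative, not from the article:

```python
def gini(incomes):
    """Gini coefficient on a 0-100 scale: the mean absolute difference
    between every pair of incomes, divided by twice the mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return 100 * total_diff / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # perfectly equal incomes: 0.0
print(gini([0, 0, 0, 100]))    # one family captures everything: 75.0
```

Note that with a finite sample of n families the maximum is 100(n−1)/n rather than 100 (hence 75.0 for four families above); it approaches 100 as n grows.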

How does this compare to Ginis for other rich countries? We take the top prize. Our 38 leaves us tied with Portugal atop the rankings of the richest countries. American exceptionalism indeed.

Not since the Roaring Twenties have the richest in America had it so good.

Economists Thomas Piketty and Emmanuel Saez have calculated the share of U.S. income going to the top 1 percent of American households.

The share was a lofty 18.9 percent in 2007, more than double the 8.3 percent from 1970. The 2007 number was last surpassed in 1928, when the share reached 19.6 percent.

And the other rich countries? The most recent numbers for Germany and Japan, for example, are 8.9 percent and 9.2 percent. We win again, going away.

Inequality of income is high, but inequality of wealth is much higher still. Those on Forbes magazine's 2010 list of the 400 richest Americans, headed by Bill Gates with a net worth of $54 billion, together own wealth totaling $1.27 trillion.

Compare that with the total net worth of the bottom 50 percent of households: $1.61 trillion as of 2007, the most recent number.

That's right. The 400 richest people in the country are worth nearly as much as the poorest 57 million households.

And the Gini coefficient for household wealth, as opposed to income? An eye-popping 86.5.

Income mobility, on which Cunningham dwells at some length, is different from inequality, but related in an important way.

High mobility, if true, sums up the American dream and lightens the burden of inequality. It means that the poor have a good chance to climb the economic ladder.

Economists measure mobility in different ways. Some compare all families at two different moments, say 10 years apart; they find that mobility appears to be relatively high.

This is the approach taken in the government reports cited by Cunningham.

It has a serious weakness. Under this approach, much of what appears to be mobility is just college students beginning their careers or older workers retiring.

That's not income mobility. It's the normal cycle of economic life.

One can, more usefully, compare families only to others in the same age cohort over time. By this measure mobility is much lower, and it's hardly budged in two generations.

In a recent study, economist Wojciech Kopczuk of Columbia and his coauthors estimated mobility by looking within cohorts.

They found that only about one person in 30 can expect to move from the bottom 40 percent of the income scale into the top 20 percent within 10 years. Mobility is not high, and it is not rising.

There exists a determined and noisy band of American inequality deniers, to which Cunningham evidently belongs. We all need to be on guard against believing the things they know that just ain't so.

Jay Coggins is an applied economics professor at the University of Minnesota.

Sunday, March 27, 2011

Monday, March 14, 2011

Vilnius transport (when KNA follows behind :)





Monday, February 21, 2011

Monday, February 7, 2011

Tuesday, February 1, 2011

Obama's priorities without funding

America’s Ungovernable Budget
Jeffrey D. Sachs
2011-01-31, www.project-syndicate.org

NEW YORK – The heart of any government is found in its budget. Politicians can make endless promises, but if the budget doesn’t add up, politics is little more than mere words.

The United States is now caught in such a bind. In his recent State of the Union address, President Barack Obama painted a convincing picture of modern, twenty-first-century government. His Republican Party opponents complained that Obama’s proposals would bust the budget. But the truth is that both parties are hiding from the reality: without more taxes, a modern, competitive US economy is not possible.

Obama rightly emphasized that competitiveness in the world today depends on an educated workforce and modern infrastructure. That is true for any country, but it is especially relevant for rich countries. The US and Europe are in direct competition with Brazil, China, India, and other emerging economies, where wage levels are sometimes one-quarter those in high-income countries (if not even lower). America and Europe will keep their high living standards only by basing their competitiveness on advanced skills, cutting-edge technologies, and modern infrastructure.

That is why Obama called for an increase in US public investment in three areas: education, science and technology, and infrastructure (including broadband Internet, fast rail, and clean energy). He spelled out a vision of future growth in which public and private investment would be complementary, mutually supportive pillars.

Obama emphasized these themes for good reason. Unemployment in the US now stands at nearly 10% of the labor force, in part because more new jobs are being created in the emerging economies, and many of the jobs now being created in the US pay less than in the past, owing to greater global competition. Unless the US steps up its investment in education, science, technology, and infrastructure, these adverse trends will continue.

But Obama’s message lost touch with reality when he turned his attention to the budget deficit. Acknowledging that recent fiscal policies had put the US on an unsustainable trajectory of rising public debt, Obama said that moving towards budget balance was now essential for fiscal stability. So he called for a five-year freeze on what the US government calls “discretionary” civilian spending.

The problem is that more than half of such spending is on education, science and technology, and infrastructure – the areas that Obama had just argued should be strengthened. After telling Americans how important government investment is for modern growth, he promised to freeze that spending for the next five years!

Politicians often change their message from one speech to the next, but rarely contradict it so glaringly in the same speech. That contradiction highlights the sad and self-defeating nature of US budget policies over the past 25 years, and most likely in the years to come. On the one hand, the US government must invest more to promote economic competitiveness. On the other hand, US taxes are chronically too low to support the level of government investment that is needed.

America’s fiscal reality was made painfully clear two days after Obama’s speech, in a new study from the Congressional Budget Office, which revealed that the budget deficit this year will reach nearly $1.5 trillion – a sum almost unimaginable even for an economy the size of the US. At nearly 10% of GDP, the deficit is resulting in a mountain of debt that threatens America’s future.

The CBO study also made clear that December’s tax-cut agreement between Obama and the Republican opposition willfully and deliberately increased the budget deficit sharply. Various tax cuts initiated by George W. Bush were set to expire at the end of 2010. Obama and the Republicans agreed to continue those tax cuts for at least two years (they will now probably continue beyond that), thereby lowering tax revenue by $350 billion this year and again in 2012. Tax cuts for the richest Americans were part of the package.

The truth of US politics today is simple. The key policy for the leaders of both political parties is tax cuts, especially for the rich. Both political parties, and the White House, would rather cut taxes than spend more on education, science and technology, and infrastructure. And the explanation is straightforward: the richest households fund political campaigns. Both parties therefore cater to their wishes.

As a result, America’s total tax revenues as a share of national income are among the lowest of all high-income countries, roughly 30%, compared to around 40% in Europe. But 30% of GDP is not enough to cover the needs of health, education, science and technology, social security, infrastructure, and other vital government responsibilities.

One budget area can and should be cut: military spending. But even if America’s wildly excessive military budget is cut sharply (and politicians in both parties are resisting that), there will still be a need for new taxes.

The economic and social consequences of a generation of tax cutting are clear. America is losing its international competitiveness, neglecting its poor – one in five American children is trapped in poverty – and leaving a mountain of debt to its young. For all of the Obama administration’s lofty rhetoric, his fiscal-policy proposals make no serious attempt to address these problems. To do so would require calling for higher taxes, and that – as George H. W. Bush learned in 1992 – is no way to get re-elected.

Jeffrey D. Sachs is Professor of Economics and Director of the Earth Institute at Columbia University. He is also Special Adviser to United Nations Secretary-General on the Millennium Development Goals.

Monday, January 17, 2011

Thaler on the efficient markets hypothesis

Markets can be wrong and the price is not always right
By Richard Thaler

www.ft.com, August 4 2009

I recently had the pleasure of reading Justin Fox’s new book The Myth of the Rational Market . It offers an engaging history of the research that has come to be called the “efficient market hypothesis”. It is similar in style to the classic by the late Peter Bernstein, Against the Gods. All the quotes in this column are taken from it. The book was mostly written before the financial crisis . However, it is natural to ask if the experiences over the last year should change our view of the EMH.

It helps to start with a quick review of rational finance. Modern finance began in the 1950s when many of the great economists of the second half of the 20th century began their careers. The previous generation of economists, such as John Maynard Keynes, were less formal in their writing and less tied to rationality as their underlying tool. This is no accident. As economics began to stress mathematical models, economists found that the simplest models to solve were those that assumed everyone in the economy was rational. This is similar to doing physics without bothering with the messy bits caused by friction. Modern finance followed this trend.

From the starting point of rational investors came the idea of the efficient market hypothesis, a theory first elucidated by my colleague and golfing buddy Gene Fama. The EMH has two components that I call “The Price is Right” and “No Free Lunch”. The price is right principle says asset prices will, to use Mr Fama’s words “fully reflect” available information, and thus “provide accurate signals for resource allocation”. The no free lunch principle is that market prices are impossible to predict and so it is hard for any investor to beat the market after taking risk into account.

For many years the EMH was “taken as a fact of life” by economists, as Michael Jensen, a Harvard professor, put it, but the evidence for the price is right component was always hard to assess. Some economists took the fact that prices were unpredictable to infer that prices were in fact “right”. However, as early as 1984 Robert Shiller, the economist, correctly and boldly called this “one of the most remarkable errors in the history of economic thought”. The reason this is an error is that prices can be unpredictable and still wrong; the difference between the random-walk fluctuations of correct asset prices and the unpredictable wanderings of a drunk is not discernible.

Tests of this component of EMH are made difficult by what Mr Fama calls the “joint hypothesis problem”. Simply put, it is hard to reject the claim that prices are right unless you have a theory of how prices are supposed to behave. However, the joint hypothesis problem can be avoided in a few special cases. For example, stock market observers – as early as Benjamin Graham in the 1930s – noted the odd fact that the prices of closed-end mutual funds (whose shares are traded on stock exchanges rather than redeemed for cash) are often different from the value of the shares they own. This violates the basic building block of finance – the law of one price – and does not depend on any pricing model. During the technology bubble other violations of this law were observed. When 3Com, the technology company, spun off its Palm unit, only 5 per cent of the Palm shares were sold; the rest went to 3Com shareholders, with each 3Com shareholder getting 1.5 shares of Palm. It does not take an economist to see that in a rational world the price of 3Com would have to be greater than 1.5 times the price of a Palm share, but for months this simple bit of arithmetic was violated. The stock market put a negative value on the shares of 3Com, less its interest in Palm. Really.
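The 3Com/Palm arithmetic can be checked directly: since each 3Com share carried 1.5 Palm shares, the market's implied value of 3Com's remaining ("stub") business is the 3Com price minus 1.5 times the Palm price, and under the law of one price it cannot be negative. A minimal sketch, using illustrative prices roughly like those reported after Palm's March 2000 IPO (the exact figures here are assumptions, not from the column):

```python
def implied_stub_value(parent_price, subsidiary_price, shares_per_parent):
    """Value the market assigns to the parent's remaining business:
    parent share price minus the value of the subsidiary shares it carries."""
    return parent_price - shares_per_parent * subsidiary_price

# Illustrative (assumed) prices: 3Com near $82, Palm near $95, 1.5 Palm shares per 3Com share.
stub = implied_stub_value(parent_price=82.0, subsidiary_price=95.0, shares_per_parent=1.5)
print(stub)  # negative: the market priced 3Com-ex-Palm below zero
```

A negative stub is exactly the "simple bit of arithmetic" violation Thaler describes; arbitraging it away was hard because Palm shares were nearly impossible to borrow and short.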

Compared to the price is right component, the no free lunch aspect of the EMH has fared better. Mr Jensen’s doctoral thesis published in 1968 set the right tone when he found that, as a group, mutual fund managers could not outperform the market. There have been dozens of studies since then, but the basic conclusion is the same. Although there are some anomalies, the market seems hard to beat. That does not prevent people from trying. For years people predicted fees paid to money managers would fall as investors switched to index funds or cheaper passive strategies, but instead assets were directed to hedge funds that charge very high fees.

Now, a year into the crisis, where has it left the advocates of the EMH? First, some good news. If anything, our respect for the no free lunch component should have risen. The reason is related to the joint hypothesis problem. Many investment strategies that seemed to be beating the market were not doing so once the true measure of risk was considered. Even Alan Greenspan, the former Federal Reserve chairman, has admitted that investors were fooled about the risks of mortgage-backed securities.

The bad news for EMH lovers is that the price is right component is in more trouble than ever. Fischer Black (of Black-Scholes fame) once defined a market as efficient if its prices were “within a factor of two of value” and he opined that by this (rather loose) definition “almost all markets are efficient almost all the time”. Sadly Black died in 1995, but had he lived to see the technology bubble and the bubbles in housing and mortgages he might have amended his standard to a factor of three. Of course, no one can prove that any of these markets were bubbles. But the price of real estate in places such as Phoenix and Las Vegas seemed like bubbles at the time. This does not mean it was possible to make money from this insight. Lunches are still not free. Shorting internet stocks or Las Vegas real estate two years before the peak was a good recipe for bankruptcy, and no one has yet found a way to predict the end of a bubble.

What lessons should we draw from this? On the free lunch component there are two. The first is that many investments have risks that are more correlated than they appear. The second is that high returns based on high leverage may be a mirage. One would think rational investors would have learnt this from the fall of Long Term Capital Management, when both problems were evident, but the lure of seemingly high returns is hard to resist. On the price is right, if we include the earlier bubble in Japanese real estate, we have now had three enormous price distortions in recent memory. They led to misallocations of resources measured in the trillions and, in the latest bubble, a global credit meltdown. If asset prices could be relied upon to always be “right”, then these bubbles would not occur. But they have, so what are we to do?

While imperfect, financial markets are still the best way to allocate capital. Even so, knowing that prices can be wrong suggests that governments could usefully adopt automatic stabilising activity, such as linking the down-payment for mortgages to a measure of real estate frothiness or ensuring that bank reserve requirements are set dynamically according to market conditions. After all, the market price is not always right.

The writer is a professor of economics and behavioural science at the University of Chicago Booth School of Business and the co-author of Nudge

Sunday, January 16, 2011

Thursday, January 13, 2011

Krugman on the euro

Can Europe Be Saved?
By PAUL KRUGMAN, January 12, 2011, www.nytimes.com

THERE’S SOMETHING peculiarly apt about the fact that the current European crisis began in Greece. For Europe’s woes have all the aspects of a classical Greek tragedy, in which a man of noble character is undone by the fatal flaw of hubris.

Not long ago Europeans could, with considerable justification, say that the current economic crisis was actually demonstrating the advantages of their economic and social model. Like the United States, Europe suffered a severe slump in the wake of the global financial meltdown; but the human costs of that slump seemed far less in Europe than in America. In much of Europe, rules governing worker firing helped limit job loss, while strong social-welfare programs ensured that even the jobless retained their health care and received a basic income. Europe’s gross domestic product might have fallen as much as ours, but the Europeans weren’t suffering anything like the same amount of misery. And the truth is that they still aren’t.

Yet Europe is in deep crisis — because its proudest achievement, the single currency adopted by most European nations, is now in danger. More than that, it’s looking increasingly like a trap. Ireland, hailed as the Celtic Tiger not so long ago, is now struggling to avoid bankruptcy. Spain, a booming economy until recent years, now has 20 percent unemployment and faces the prospect of years of painful, grinding deflation.

The tragedy of the Euromess is that the creation of the euro was supposed to be the finest moment in a grand and noble undertaking: the generations-long effort to bring peace, democracy and shared prosperity to a once and frequently war-torn continent. But the architects of the euro, caught up in their project’s sweep and romance, chose to ignore the mundane difficulties a shared currency would predictably encounter — to ignore warnings, which were issued right from the beginning, that Europe lacked the institutions needed to make a common currency workable. Instead, they engaged in magical thinking, acting as if the nobility of their mission transcended such concerns.

The result is a tragedy not only for Europe but also for the world, for which Europe is a crucial role model. The Europeans have shown us that peace and unity can be brought to a region with a history of violence, and in the process they have created perhaps the most decent societies in human history, combining democracy and human rights with a level of individual economic security that America comes nowhere close to matching. These achievements are now in the process of being tarnished, as the European dream turns into a nightmare for all too many people. How did that happen?


It all began with coal and steel. On May 9, 1950 — a date whose anniversary is now celebrated as Europe Day — Robert Schuman, the French foreign minister, proposed that his nation and West Germany pool their coal and steel production. That may sound prosaic, but Schuman declared that it was much more than just a business deal.

For one thing, the new Coal and Steel Community would make any future war between Germany and France “not merely unthinkable, but materially impossible.” And it would be a first step on the road to a “federation of Europe,” to be achieved step by step via “concrete achievements which first create a de facto solidarity.” That is, economic measures would both serve mundane ends and promote political unity.

The Coal and Steel Community eventually evolved into a customs union within which all goods were freely traded. Then, as democracy spread within Europe, so did Europe’s unifying economic institutions. Greece, Spain and Portugal were brought in after the fall of their dictatorships; Eastern Europe after the fall of Communism.

In the 1980s and ’90s this “widening” was accompanied by “deepening,” as Europe set about removing many of the remaining obstacles to full economic integration. (Eurospeak is a distinctive dialect, sometimes hard to understand without subtitles.) Borders were opened; freedom of personal movement was guaranteed; and product, safety and food regulations were harmonized, a process immortalized by the Eurosausage episode of the TV show “Yes Minister,” in which the minister in question is told that under new European rules, the traditional British sausage no longer qualifies as a sausage and must be renamed the Emulsified High-Fat Offal Tube. (Just to be clear, this happened only on TV.)

The creation of the euro was proclaimed the logical next step in this process. Once again, economic growth would be fostered with actions that also reinforced European unity.

The advantages of a single European currency were obvious. No more need to change money when you arrived in another country; no more uncertainty on the part of importers about what a contract would actually end up costing or on the part of exporters about what promised payment would actually be worth. Meanwhile, the shared currency would strengthen the sense of European unity. What could go wrong?

The answer, unfortunately, was that currency unions have costs as well as benefits. And the case for a single European currency was much weaker than the case for a single European market — a fact that European leaders chose to ignore.


International monetary economics is, not surprisingly, an area of frequent disputes. As it happens, however, these disputes don’t line up across the usual ideological divide. The hard right often favors hard money — preferably a gold standard — but left-leaning European politicians have been enthusiastic proponents of the euro. Liberal American economists, myself included, tend to favor freely floating national currencies that leave more scope for activist economic policies — in particular, cutting interest rates and increasing the money supply to fight recessions. Yet the classic argument for flexible exchange rates was made by none other than Milton Friedman.

The case for a transnational currency is, as we’ve already seen, obvious: it makes doing business easier. Before the euro was introduced, it was really anybody’s guess how much this ultimately mattered: there were relatively few examples of countries using other nations’ currencies. For what it was worth, statistical analysis suggested that adopting a common currency had big effects on trade, which suggested in turn large economic gains. Unfortunately, this optimistic assessment hasn’t held up very well since the euro was created: the best estimates now indicate that trade among euro nations is only 10 or 15 percent larger than it would have been otherwise. That’s not a trivial number, but neither is it transformative.

Still, there are obviously benefits from a currency union. It’s just that there’s a downside, too: by giving up its own currency, a country also gives up economic flexibility.

Imagine that you’re a country that, like Spain today, recently saw wages and prices driven up by a housing boom, which then went bust. Now you need to get those costs back down. But getting wages and prices to fall is tough: nobody wants to be the first to take a pay cut, especially without some assurance that prices will come down, too. Two years of intense suffering have brought Irish wages down to some extent, although Spain and Greece have barely begun the process. It’s a nasty affair, and as we’ll see later, cutting wages when you’re awash in debt creates new problems.

If you still have your own currency, however, you wouldn’t have to go through the protracted pain of cutting wages: you could just devalue your currency — reduce its value in terms of other currencies — and you would effect a de facto wage cut.

Won’t workers reject de facto wage cuts via devaluation just as much as explicit cuts in their paychecks? Historical experience says no. In the current crisis, it took Ireland two years of severe unemployment to achieve about a 5 percent reduction in average wages. But in 1993 a devaluation of the Irish punt brought an instant 10 percent reduction in Irish wages measured in German currency.
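The arithmetic behind that comparison is worth making explicit. Here is a minimal sketch, with hypothetical wage and exchange-rate figures (none of these numbers come from the text), of how a 10 percent devaluation delivers a 10 percent wage cut measured in a trading partner's currency without changing anyone's paycheck:

```python
# Illustrative sketch with made-up numbers: a devaluation cuts wages
# in foreign-currency terms while paychecks stay the same.

def wage_in_partner_currency(local_wage, exchange_rate):
    """Wage expressed in the partner's currency.
    exchange_rate = units of partner currency per unit of local currency."""
    return local_wage * exchange_rate

wage = 2000.0        # monthly wage in local currency (unchanged throughout)
rate_before = 0.50   # partner-currency units per local unit, pre-devaluation
rate_after = 0.45    # after a 10% devaluation

before = wage_in_partner_currency(wage, rate_before)  # 1000.0
after = wage_in_partner_currency(wage, rate_after)    # 900.0
print(f"Wage in partner currency falls {(1 - after / before):.0%}")  # 10%
```

No individual worker had to agree to anything: the cut happens to everyone at once, which is exactly the coordination point Friedman made.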

Why the difference? Back in 1953, Milton Friedman offered an analogy: daylight saving time. It makes a lot of sense for businesses to open later during the winter months, yet it’s hard for any individual business to change its hours: if you operate from 10 to 6 when everyone else is operating 9 to 5, you’ll be out of sync. By requiring that everyone shift clocks back in the fall and forward in the spring, daylight saving time obviates this coordination problem. Similarly, Friedman argued, adjusting your currency’s value solves the coordination problem when wages and prices are out of line, sidestepping the unwillingness of workers to be the first to take pay cuts.

So while there are benefits of a common currency, there are also important potential advantages to keeping your own currency. And the terms of this trade-off depend on underlying conditions.

On one side, the benefits of a shared currency depend on how much business would be affected.

I think of this as the Iceland-Brooklyn issue. Iceland, with only 320,000 people, has its own currency — and that fact has given it valuable room for maneuver. So why isn’t Brooklyn, with roughly eight times Iceland’s population, an even better candidate for an independent currency? The answer is that Brooklyn, located as it is in the middle of metro New York rather than in the middle of the Atlantic, has an economy deeply enmeshed with those of neighboring boroughs. And Brooklyn residents would pay a large price if they had to change currencies every time they did business in Manhattan or Queens.

So countries that do a lot of business with one another may have a lot to gain from a currency union.

On the other hand, as Friedman pointed out, forming a currency union means sacrificing flexibility. How serious is this loss? That depends. Let’s consider what may at first seem like an odd comparison between two small, troubled economies.

Climate, scenery and history aside, the nation of Ireland and the state of Nevada have much in common. Both are small economies of a few million people highly dependent on selling goods and services to their neighbors. (Nevada’s neighbors are other U.S. states, Ireland’s other European nations, but the economic implications are much the same.) Both were boom economies for most of the past decade. Both had huge housing bubbles, which burst painfully. Both are now suffering roughly 14 percent unemployment. And both are members of larger currency unions: Ireland is part of the euro zone, Nevada part of the dollar zone, otherwise known as the United States of America.

But Nevada’s situation is much less desperate than Ireland’s.

First of all, the fiscal side of the crisis is less serious in Nevada. It’s true that budgets in both Ireland and Nevada have been hit extremely hard by the slump. But much of the spending Nevada residents depend on comes from federal, not state, programs. In particular, retirees who moved to Nevada for the sunshine don’t have to worry that the state’s reduced tax take will endanger their Social Security checks or their Medicare coverage. In Ireland, by contrast, both pensions and health spending are on the cutting block.

Also, Nevada, unlike Ireland, doesn’t have to worry about the cost of bank bailouts, not because the state has avoided large loan losses but because those losses, for the most part, aren’t Nevada’s problem. Indeed, Nevada accounts for a disproportionate share of the losses incurred by Fannie Mae and Freddie Mac, the government-sponsored mortgage companies — losses that, like Social Security and Medicare payments, will be covered by Washington, not Carson City.

And there’s one more advantage to being a U.S. state: it’s likely that Nevada’s unemployment problem will be greatly alleviated over the next few years by out-migration, so that even if the lost jobs don’t come back, there will be fewer workers chasing the jobs that remain. Ireland will, to some extent, avail itself of the same safety valve, as Irish citizens leave in search of work elsewhere and workers who came to Ireland during the boom years depart. But Americans are extremely mobile; if historical patterns are any guide, emigration will bring Nevada’s unemployment rate back in line with the U.S. average within a few years, even if job growth in Nevada continues to lag behind growth in the nation as a whole.

Over all, then, even as both Ireland and Nevada have been especially hard-luck cases within their respective currency zones, Nevada’s medium-term prospects look much better.

What does this have to do with the case for or against the euro? Well, when the single European currency was first proposed, an obvious question was whether it would work as well as the dollar does here in America. And the answer, clearly, was no — for exactly the reasons the Ireland-Nevada comparison illustrates. Europe isn’t fiscally integrated: German taxpayers don’t automatically pick up part of the tab for Greek pensions or Irish bank bailouts. And while Europeans have the legal right to move freely in search of jobs, in practice imperfect cultural integration — above all, the lack of a common language — makes workers less geographically mobile than their American counterparts.

And now you see why many American (and some British) economists have always been skeptical about the euro project. U.S.-based economists had long emphasized the importance of certain preconditions for currency union — most famously, Robert Mundell of Columbia stressed the importance of labor mobility, while Peter Kenen, my colleague at Princeton, emphasized the importance of fiscal integration. America, we know, has a currency union that works, and we know why it works: because it coincides with a nation — a nation with a big central government, a common language and a shared culture. Europe has none of these things, which from the beginning made the prospects of a single currency dubious.

These observations aren’t new: everything I’ve just said was well known by 1992, when the Maastricht Treaty set the euro project in motion. So why did the project proceed? Because the idea of the euro had gripped the imagination of European elites. Except in Britain, where Gordon Brown persuaded Tony Blair not to join, political leaders throughout Europe were caught up in the romance of the project, to such an extent that anyone who expressed skepticism was considered outside the mainstream.

Back in the ’90s, people who were present told me that staff members at the European Commission were initially instructed to prepare reports on the costs and benefits of a single currency — but that after their superiors got a look at some preliminary work, those instructions were altered: they were told to prepare reports just on the benefits. To be fair, when I’ve told that story to others who were senior officials at the time, they’ve disputed that — but whoever’s version is right, the fact that some people were making such a claim captures the spirit of the time.

The euro, then, would proceed. And for a while, everything seemed to go well.


The euro officially came into existence on Jan. 1, 1999. At first it was a virtual currency: bank accounts and electronic transfers were denominated in euros, but people still had francs, marks and lire (now considered denominations of the euro) in their wallets. Three years later, the final transition was made, and the euro became Europe’s money.

The transition was smooth: A.T.M.’s and cash registers were converted swiftly and with few glitches. The euro quickly became a major international currency: the euro bond market soon came to rival the dollar bond market; euro bank notes began circulating around the world. And the creation of the euro instilled a new sense of confidence, especially in those European countries that had historically been considered investment risks. Only later did it become apparent that this surge of confidence was bait for a dangerous trap.

Greece, with its long history of debt defaults and bouts of high inflation, was the most striking example. Until the late 1990s, Greece’s fiscal history was reflected in its bond yields: investors would buy bonds issued by the Greek government only if they paid much higher interest than bonds issued by governments perceived as safe bets, like Germany’s. As the euro’s debut approached, however, the risk premium on Greek bonds melted away. After all, the thinking went, Greek debt would soon be immune from the dangers of inflation: the European Central Bank would see to that. And it wasn’t possible to imagine any member of the newly minted monetary union going bankrupt, was it?

Indeed, by the middle of the 2000s just about all fear of country-specific fiscal woes had vanished from the European scene. Greek bonds, Irish bonds, Spanish bonds, Portuguese bonds — they all traded as if they were as safe as German bonds. The aura of confidence extended even to countries that weren’t on the euro yet but were expected to join in the near future: by 2005, Latvia, which at that point hoped to adopt the euro by 2008, was able to borrow almost as cheaply as Ireland. (Latvia’s switch to the euro has been put off for now, although neighboring Estonia joined on Jan. 1.)

As interest rates converged across Europe, the formerly high-interest-rate countries went, predictably, on a borrowing spree. (This borrowing spree was, it’s worth noting, largely financed by banks in Germany and other traditionally low-interest-rate countries; that’s why the current debt problems of the European periphery are also a big problem for the European banking system as a whole.) In Greece it was largely the government that ran up big debts. But elsewhere, private players were the big borrowers. Ireland, as I’ve already noted, had a huge real estate boom: home prices rose 180 percent from 1998, just before the euro was introduced, to 2007. Prices in Spain rose almost as much. There were booms in those not-yet-euro nations, too: money flooded into Estonia, Latvia, Lithuania, Bulgaria and Romania.

It was a heady time, and not only for the borrowers. In the late 1990s, Germany’s economy was depressed as a result of low demand from domestic consumers. But it recovered in the decade that followed, thanks to an export boom driven by its European neighbors’ spending sprees.

Everything, in short, seemed to be going swimmingly: the euro was pronounced a great success.

Then the bubble burst.

You still hear people talking about the global economic crisis of 2008 as if it were something made in America. But Europe deserves equal billing. This was, if you like, a North Atlantic crisis, with not much to choose between the messes of the Old World and the New. We had our subprime borrowers, who either chose to take on or were misled into taking on mortgages too big for their incomes; they had their peripheral economies, which similarly borrowed much more than they could really afford to pay back. In both cases, real estate bubbles temporarily masked the underlying unsustainability of the borrowing: as long as housing prices kept rising, borrowers could always pay back previous loans with more money borrowed against their properties. Sooner or later, however, the music would stop. Both sides of the Atlantic were accidents waiting to happen.

In Europe, the first round of damage came from the collapse of those real estate bubbles, which devastated employment in the peripheral economies. In 2007, construction accounted for 13 percent of total employment in both Spain and Ireland, more than twice as much as in the United States. So when the building booms came to a screeching halt, employment crashed. Overall employment fell 10 percent in Spain and 14 percent in Ireland; the Irish situation would be the equivalent of losing almost 20 million jobs here.

But that was only the beginning. In late 2009, as much of the world was emerging from financial crisis, the European crisis entered a new phase. First Greece, then Ireland, then Spain and Portugal suffered drastic losses in investor confidence and hence a significant rise in borrowing costs. Why?

In Greece the story is straightforward: the government behaved irresponsibly, lied about it and got caught. During the years of easy borrowing, Greece’s conservative government ran up a lot of debt — more than it admitted. When the government changed hands in 2009, the accounting fictions came to light; suddenly it was revealed that Greece had both a much bigger deficit and substantially more debt than anyone had realized. Investors, understandably, took flight.

But Greece is actually an unrepresentative case. Just a few years ago Spain, by far the largest of the crisis economies, was a model European citizen, with a balanced budget and public debt only about half as large, as a percentage of G.D.P., as that of Germany. The same was true for Ireland. So what went wrong?

First, there was a large direct fiscal hit from the slump. Revenue plunged in both Spain and Ireland, in part because tax receipts depended heavily on real estate transactions. Meanwhile, as unemployment soared, so did the cost of unemployment benefits — remember, these are European welfare states, which have much more extensive programs to shield their citizens from misfortune than we do. As a result, both Spain and Ireland went from budget surpluses on the eve of the crisis to huge budget deficits by 2009.

Then there were the costs of financial clean-up. These have been especially crippling in Ireland, where banks ran wild in the boom years (and were allowed to do so thanks to close personal and financial ties with government officials). When the bubble burst, the solvency of Irish banks was immediately suspect. In an attempt to avert a massive run on the financial system, Ireland’s government guaranteed all bank debts — saddling the government itself with those debts, bringing its own solvency into question. Big Spanish banks were well regulated by comparison, but there was and is a great deal of nervousness about the status of smaller savings banks and concern about how much the Spanish government will have to spend to keep these banks from collapsing.

All of this helps explain why lenders have lost faith in peripheral European economies. Still, there are other nations — in particular, both the United States and Britain — that have been running deficits that, as a percentage of G.D.P., are comparable to the deficits in Spain and Ireland. Yet they haven’t suffered a comparable loss of lender confidence. What is different about the euro countries?

One possible answer is “nothing”: maybe one of these days we’ll wake up and find that the markets are shunning America, just as they’re shunning Greece. But the real answer is probably more systemic: it’s the euro itself that makes Spain and Ireland so vulnerable. For membership in the euro means that these countries have to deflate their way back to competitiveness, with all the pain that implies.

The trouble with deflation isn’t just the coordination problem Milton Friedman highlighted, in which it’s hard to get wages and prices down when everyone wants someone else to move first. Even when countries successfully drive down wages, which is now happening in all the euro-crisis countries, they run into another problem: incomes are falling, but debt is not.

As the American economist Irving Fisher pointed out almost 80 years ago, the collision between deflating incomes and unchanged debt can greatly worsen economic downturns. Suppose the economy slumps, for whatever reason: spending falls and so do prices and wages. But debts do not, so debtors have to meet the same obligations with a smaller income; to do this, they have to cut spending even more, further depressing the economy. The way to avoid this vicious circle, Fisher said, was monetary expansion that heads off deflation. And in America and Britain, the Federal Reserve and the Bank of England, respectively, are trying to do just that. But Greece, Spain and Ireland don’t have that option — they don’t even have their own currencies, and in any case they need deflation to get their costs in line.
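Fisher's dynamic is easy to see with stylized numbers. Here is a minimal sketch, using hypothetical income and debt figures, of how steady deflation raises the debt burden year after year even though no new debt is taken on:

```python
# Stylized illustration of Fisher's debt-deflation point (made-up numbers):
# incomes fall with deflation while nominal debt stays fixed, so the
# debt-to-income ratio rises even as debtors cut spending.

nominal_debt = 100.0   # fixed nominal obligation, never increased
income = 50.0          # initial nominal income
deflation = 0.05       # prices and incomes fall 5% per period

for year in range(4):
    burden = nominal_debt / income   # debt-to-income ratio
    print(f"year {year}: income {income:.1f}, debt/income {burden:.2f}")
    income *= (1 - deflation)  # deflation shrinks income; the debt does not
```

In four years the debt-to-income ratio climbs from 2.0 to about 2.33 without a single new euro borrowed, which is why each round of spending cuts makes the next one harder.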

And so there’s a crisis. Over the course of the past year or so, first Greece, then Ireland, became caught up in a vicious financial circle: as potential lenders lost confidence, the interest rates that they had to pay on the debt rose, undermining future prospects, leading to a further loss of confidence and even higher interest rates. Stronger European nations averted an immediate implosion only by providing Greece and Ireland with emergency credit lines, letting them bypass private markets for the time being. But how is this all going to work out?


Some economists, myself included, look at Europe’s woes and have the feeling that we’ve seen this movie before, a decade ago on another continent — specifically, in Argentina.

Unlike Spain or Greece, Argentina never gave up its own currency, but in 1991 it did the next best thing: it rigidly pegged its currency to the U.S. dollar, establishing a “currency board” in which each peso in circulation was backed by a dollar in reserves. This was supposed to prevent any return to Argentina’s old habit of covering its deficits by printing money. And for much of the 1990s, Argentina was rewarded with much lower interest rates and large inflows of foreign capital.

Eventually, however, Argentina slid into a persistent recession and lost investor confidence. Argentina’s government tried to restore that confidence through rigorous fiscal orthodoxy, slashing spending and raising taxes. To buy time for austerity to have a positive effect, Argentina sought and received large loans from the International Monetary Fund — in much the same way that Greece and Ireland have sought emergency loans from their neighbors. But the persistent decline of the Argentine economy, combined with deflation, frustrated the government’s efforts, even as high unemployment led to growing unrest.

By early 2002, after angry demonstrations and a run on the banks, it had all fallen apart. The link between the peso and the dollar collapsed, with the peso plunging; meanwhile, Argentina defaulted on its debts, eventually paying only about 35 cents on the dollar.

It’s hard to avoid the suspicion that something similar may be in the cards for one or more of Europe’s problem economies. After all, the policies now being undertaken by the crisis countries are, qualitatively at least, very similar to those Argentina tried in its desperate effort to save the peso-dollar link: harsh fiscal austerity in an effort to regain the market’s confidence, backed in Greece and Ireland by official loans intended to buy time until private lenders regain confidence. And if an Argentine-style outcome is the end of the line, it will be a terrible blow to the euro project. Is that what’s going to happen?

Not necessarily. As I see it, there are four ways the European crisis could play out (and it may play out differently in different countries). Call them toughing it out; debt restructuring; full Argentina; and revived Europeanism.

Toughing it out: Troubled European economies could, conceivably, reassure creditors by showing sufficient willingness to endure pain and thereby avoid either default or devaluation. The role models here are the Baltic nations: Estonia, Lithuania and Latvia. These countries are small and poor by European standards; they want very badly to gain the long-term advantages they believe will accrue from joining the euro and becoming part of a greater Europe. And so they have been willing to endure very harsh fiscal austerity while wages gradually come down in the hope of restoring competitiveness — a process known in Eurospeak as “internal devaluation.”

Have these policies been successful? It depends on how you define “success.” The Baltic nations have, to some extent, succeeded in reassuring markets, which now consider them less risky than Ireland, let alone Greece. Meanwhile, wages have come down, declining 15 percent in Latvia and more than 10 percent in Lithuania and Estonia. All of this has, however, come at immense cost: the Baltics have experienced Depression-level declines in output and employment. It’s true that they’re now growing again, but all indications are that it will be many years before they make up the lost ground.

It says something about the current state of Europe that many officials regard the Baltics as a success story. I find myself quoting Tacitus: “They make a desert and call it peace” — or, in this case, adjustment. Still, this is one way the euro zone could survive intact.

Debt restructuring: At the time of writing, Irish 10-year bonds were yielding about 9 percent, while Greek 10-years were yielding 12½ percent. At the same time, German 10-years — which, like Irish and Greek bonds, are denominated in euros — were yielding less than 3 percent. The message from the markets was clear: investors don’t expect Greece and Ireland to pay their debts in full. They are, in other words, expecting some kind of debt restructuring, like the restructuring that reduced Argentina’s debt by two-thirds.
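There is a back-of-envelope way to read those yields (a rough rule of thumb, not a pricing model): if the spread over safe German bonds just compensates lenders for expected default losses, then the spread roughly equals the default probability times the loss given default. Plugging in the yields from the text and an Argentina-style recovery of 35 cents on the dollar:

```python
# Back-of-envelope reading of bond spreads (a rule of thumb, not a model):
#   spread ~= p_default * (1 - recovery_rate)
# The 35-cent recovery assumption mirrors the Argentine restructuring.

def implied_default_probability(yield_risky, yield_safe, recovery_rate):
    spread = yield_risky - yield_safe
    return spread / (1.0 - recovery_rate)

# Figures from the text: Greek 10-years at 12.5%, German at roughly 3%.
p = implied_default_probability(0.125, 0.03, recovery_rate=0.35)
print(f"Implied annual default probability: {p:.0%}")  # roughly 15%
```

On these assumptions, markets were pricing in something like a one-in-seven chance per year of a Greek restructuring, which is what "investors don't expect Greece to pay in full" means in numbers.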

Such a debt restructuring would by no means end a troubled economy’s pain. Take Greece: even if the government were to repudiate all its debt, it would still have to slash spending and raise taxes to balance its budget, and it would still have to suffer the pain of deflation. But a debt restructuring could bring the vicious circle of falling confidence and rising interest costs to an end, potentially making internal devaluation a workable if brutal strategy.

Frankly, I find it hard to see how Greece can avoid a debt restructuring, and Ireland isn’t much better. The real question is whether such restructurings will spread to Spain and — the truly frightening prospect — to Belgium and Italy, which are heavily indebted but have so far managed to avoid a serious crisis of confidence.

Full Argentina: Argentina didn’t simply default on its foreign debt; it also abandoned its link to the dollar, allowing the peso’s value to fall by more than two-thirds. And this devaluation worked: from 2003 onward, Argentina experienced a rapid export-led economic rebound.

The European country that has come closest to doing an Argentina is Iceland, whose bankers had run up foreign debts that were many times the country’s national income. Unlike Ireland, which tried to salvage its banks by guaranteeing their debts, the Icelandic government forced its banks’ foreign creditors to take losses, thereby limiting its debt burden. And by letting its banks default, the country took a lot of foreign debt off its national books.

At the same time, Iceland took advantage of the fact that it had not joined the euro and still had its own currency. It soon became more competitive by letting its currency drop sharply against other currencies, including the euro. Iceland’s wages and prices quickly fell about 40 percent relative to those of its trading partners, sparking a rise in exports and fall in imports that helped offset the blow from the banking collapse.

The combination of default and devaluation has helped Iceland limit the damage from its banking disaster. In fact, in terms of employment and output, Iceland has done somewhat better than Ireland and much better than the Baltic nations.

So will one or more troubled European nations go down the same path? To do so, they would have to overcome a big obstacle: the fact that, unlike Iceland, they no longer have their own currencies. As Barry Eichengreen of Berkeley pointed out in an influential 2007 analysis, any euro-zone country that even hinted at leaving the currency would trigger a devastating run on its banks, as depositors rushed to move their funds to safer locales. And Eichengreen concluded that this “procedural” obstacle to exit made the euro irreversible.

But Argentina’s peg to the dollar was also supposed to be irreversible, and for much the same reason. What made devaluation possible, in the end, was the fact that there was a run on the banks despite the government’s insistence that one peso would always be worth one dollar. This run forced the Argentine government to limit withdrawals, and once these limits were in place, it was possible to change the peso’s value without setting off a second run. Nothing like that has happened in Europe — yet. But it’s certainly within the realm of possibility, especially as the pain of austerity and internal devaluation drags on.

Revived Europeanism: The preceding three scenarios were grim. Is there any hope of an outcome less grim? To the extent that there is, it would have to involve taking further major steps toward that “European federation” Robert Schuman wanted 60 years ago.

In early December, Jean-Claude Juncker, the prime minister of Luxembourg, and Giulio Tremonti, Italy’s finance minister, created a storm with a proposal to create “E-bonds,” which would be issued by a European debt agency at the behest of individual European countries. Since these bonds would be guaranteed by the European Union as a whole, they would offer a way for troubled economies to avoid vicious circles of falling confidence and rising borrowing costs. On the other hand, they would potentially put governments on the hook for one another’s debts — a point that furious German officials were quick to make. The Germans are adamant that Europe must not become a “transfer union,” in which stronger governments and nations routinely provide aid to weaker ones.

Yet as the earlier Ireland-Nevada comparison shows, the United States works as a currency union in large part precisely because it is also a transfer union, in which states that haven’t gone bust support those that have. And it’s hard to see how the euro can work unless Europe finds a way to accomplish something similar.

Nobody is yet proposing that Europe move to anything resembling U.S. fiscal integration; the Juncker-Tremonti plan would be at best a small step in that direction. But Europe doesn’t seem ready to take even that modest step.


For now, the plan in Europe is to have everyone tough it out — in effect, for Greece, Ireland, Portugal and Spain to emulate Latvia and Estonia. That was the clear verdict of the most recent meeting of the European Council, at which Angela Merkel, the German chancellor, essentially got everything she wanted. Governments that can’t borrow on the private market will receive loans from the rest of Europe — but only on stiff terms: people talk about Ireland getting a “bailout,” but it has to pay almost 6 percent interest on that emergency loan. There will be no E-bonds; there will be no transfer union.

Even if this eventually works in the sense that internal devaluation has worked in the Baltics — that is, in the narrow sense that Europe’s troubled economies avoid default and devaluation — it will be an ugly process, leaving much of Europe deeply depressed for years to come. There will be political repercussions too, as the European public sees the continent’s institutions as — depending on where they sit — either bailing out deadbeats or acting as agents of heartless bill collectors.

Nor can the rest of the world look on smugly at Europe’s woes. Taken as a whole, the European Union, not the United States, is the world’s largest economy; the European Union is fully coequal with America in the running of the global trading system; Europe is the world’s most important source of foreign aid; and Europe is, whatever some Americans may think, a crucial partner in the fight against terrorism. A troubled Europe is bad for everyone else.

In any case, the odds are that the current tough-it-out strategy won’t work even in the narrow sense of avoiding default and devaluation — and the fact that it won’t work will become obvious sooner rather than later. At that point, Europe’s stronger nations will have to make a choice.

It has been 60 years since the Schuman declaration started Europe on the road to greater unity. Until now the journey along that road, however slow, has always been in the right direction. But that will no longer be true if the euro project fails. A failed euro wouldn’t send Europe back to the days of minefields and barbed wire — but it would represent a possibly irreversible blow to hopes of true European federation.

So will Europe’s strong nations let that happen? Or will they accept the responsibility, and possibly the cost, of being their neighbors’ keepers? The whole world is waiting for the answer.

Paul Krugman is a Times columnist and winner of the 2008 Nobel Memorial Prize in Economic Sciences. His latest book is “The Return of Depression Economics and the Crisis of 2008.”