United States corporate law

The New York Stock Exchange is the major center for listing and trading shares in the United States. Most corporations are, however, incorporated under the influential Delaware General Corporation Law.

United States corporate law regulates the governance, finance and power of corporations in US law. Every state and territory has its own basic corporate code, while federal law creates minimum standards for trade in company shares and governance rights, found mostly in the Securities Act of 1933 and the Securities Exchange Act of 1934, as amended by laws like the Sarbanes-Oxley Act of 2002 and the Dodd-Frank Act of 2010. The US Supreme Court has interpreted the US Constitution to allow corporations to incorporate in the state of their choice, regardless of where their headquarters are. Over the 20th century, most major corporations incorporated under the Delaware General Corporation Law, which offered lower corporate taxes, fewer shareholder rights against directors, and developed a specialized court and legal profession. Nevada has done the same. Twenty-four states follow the Model Business Corporation Act,[1] while New York and California are important due to their size.


At the Declaration of Independence, corporations had been unlawful without explicit authorization in a Royal Charter or an Act of Parliament of the United Kingdom. Since the world's first stock market crash (the South Sea Bubble of 1720), corporations had been perceived as dangerous. This was because, as the economist Adam Smith wrote in The Wealth of Nations (1776), directors managed "other people's money", and this conflict of interest meant directors were prone to "negligence and profusion". Corporations were only thought to be legitimate in specific industries (such as insurance or banking) that could not be managed efficiently through partnerships.[2] After the US Constitution was ratified in 1788, corporations were still distrusted, and were tied into debate about interstate exercise of sovereign power. The First Bank of the United States was chartered in 1791 by the US Congress to raise money for the government and create a common currency (alongside a federal excise tax and the US Mint). It had private investors (it was not government owned), but faced opposition from southern politicians who feared federal power overtaking state power. So, the First Bank's charter was written to expire in 20 years. State governments could and did also charter corporations through special legislation. In 1811, New York became the first state to have a simple public registration procedure to start corporations (rather than specific permission from the legislature) for manufacturing business.[3] It also allowed investors to have limited liability, so that if the enterprise went bankrupt investors would lose their investment, but not any extra debts that had been run up to creditors. An early US Supreme Court case, Trustees of Dartmouth College v Woodward,[4] went so far as to say that once a corporation was established, a state legislature (in this case, New Hampshire) could not amend it. 
States quickly reacted by reserving the right to regulate future dealings by corporations.[5] Generally speaking, corporations were treated as "legal persons" with separate legal personality from their shareholders, directors or employees. Corporations were the subject of legal rights and duties: they could make contracts, hold property or commit torts,[6] but there was no necessary requirement to treat a corporation as favorably as a real person.

"The Bosses of the Senate", corporate interests–from steel, copper, oil, iron, sugar, tin, and coal to paper bags, envelopes, and salt–as giant money bags looming over senators.[7]

Over the late 19th century, more and more states allowed free incorporation of businesses with a simple registration procedure.[8] Many corporations were small and democratically organized, on a one-person, one-vote basis regardless of the amount an investor had put in, and directors were frequently up for election. However, the dominant trend led towards immense corporate groups where the standard rule was one-share, one-vote. At the end of the 19th century, "trust" systems (where formal ownership had to be used for another person's benefit) were used to concentrate control in the hands of a few people, or a single person. In response, the Sherman Antitrust Act of 1890 was created to break up big business conglomerates, and the Clayton Act of 1914 gave the government power to halt mergers and acquisitions that could damage the public interest. By the end of the First World War, it was increasingly perceived that ordinary people had little voice compared to the "financial oligarchy" of bankers and industrial magnates.[9] In particular, employees lacked voice compared to shareholders, but plans for a post-war "industrial democracy" (giving employees votes for investing their labor) did not become widespread.[10] Through the 1920s, power concentrated in fewer hands as corporations issued shares with multiple voting rights, while other shares were sold with no votes at all. This practice was halted in 1926 by public pressure and by the New York Stock Exchange's refusal to list non-voting shares.[11] It was possible to sell voteless shares in the economic boom of the 1920s, because more and more ordinary people were looking to the stock market to save the new money they were earning, but the law did not guarantee good information or fair terms. New shareholders had no power to bargain against large corporate issuers, but still needed a place to save. 
Before the Wall Street Crash of 1929, people were being sold shares in corporations with fake businesses, as accounts and business reports were not made available to the investing public.

over the enterprise and over the physical property – the instruments of production – in which he has an interest, the owner has little control. At the same time he bears no responsibility with respect to the enterprise or its physical property. It has often been said that the owner of a horse is responsible. If the horse lives he must feed it. If the horse dies he must bury it. No such responsibility attaches to a share of stock. The owner is practically powerless through his own efforts to affect the underlying property ... Physical property capable of being shaped by its owner could bring to him direct satisfaction apart from the income it yielded in more concrete form. It represented an extension of his own personality. With the corporate revolution, this quality has been lost to the property owner much as it has been lost to the worker through the industrial revolution.

-- AA Berle and GC Means, The Modern Corporation and Private Property (1932) Book I, ch IV, 64

The Wall Street Crash saw the total collapse of stock market values, as shareholders realized that corporations had become overpriced. They sold shares en masse, meaning companies found it hard to get finance. The result was that thousands of businesses were forced to close, and they laid off workers. Because workers had less money to spend, businesses received less income, leading to more closures and lay-offs. This downward spiral began the Great Depression. In their foundational 1932 book, The Modern Corporation and Private Property, Berle and Means argued that under-regulation was the primary cause. They said directors had become too unaccountable, and the markets lacked basic transparency rules. This led directly to the New Deal reforms of the Securities Act of 1933 and the Securities Exchange Act of 1934. A new Securities and Exchange Commission was empowered to require corporations to disclose all material information about their business to the investing public. Because many shareholders were physically distant from corporate headquarters where meetings would take place, new rights were made to allow people to cast votes via proxies, on the view that this and other measures would make directors more accountable. Given these reforms, a major controversy still remained about the duties that corporations also owed to employees, other stakeholders, and the rest of society.[12] After World War Two, a general consensus emerged that directors were not bound purely to pursue "shareholder value" but could exercise their discretion for the good of all stakeholders, for instance by increasing wages instead of dividends, or providing services for the good of the community instead of only pursuing profits, if it was in the interests of the enterprise as a whole.[13] However, different states had different corporate laws. 
To increase revenue from corporate tax, individual states had an incentive to lower their standards in a "race to the bottom" to attract corporations to set up their headquarters in the state, particularly where directors controlled the decision to incorporate. By the 1960s, this "charter competition" had led Delaware to become home to the majority of the largest US corporations. This meant that the case law of the Delaware Chancery and Supreme Court became increasingly influential. During the 1980s, a huge takeover and merger boom decreased directors' accountability. To fend off a takeover, courts allowed boards to institute "poison pills" or "shareholder rights plans", which allowed directors to veto any bid – and probably receive a payout for letting a takeover happen. More and more people's retirement savings were being invested in the stock market, through pension funds, life insurance and mutual funds. This resulted in a vast growth in the asset management industry, which tended to take control of voting rights. Both the financial sector's share of income and executive pay for chief executive officers began to rise far beyond real wages for the rest of the workforce. The Enron scandal of 2001 led to some reforms in the Sarbanes-Oxley Act (on separating auditors from consultancy work). The global financial crisis of 2007–2008 led to minor changes in the Dodd-Frank Act (on soft regulation of pay, alongside derivative markets). However, the basic shape of corporate law in the United States has remained the same since the 1980s.