Why Raising the Minimum Wage is a Bad Idea

After a great deal of controversy surrounding the implementation of the Affordable Care Act, debate in Washington has now shifted to another issue (as it has every other time the tide has turned against the Obama Administration): "income inequality."  As has happened during other times of economic downturn, many in Congress (mostly Democrats), as well as President Obama, are now pushing for, among other things, an increase in the federal minimum wage (currently $7.25 an hour).  They claim that doing so will give a "fair" wage to those struggling to get by and take care of themselves and their families.  On the surface this may seem like common sense.  "Of course we need to make sure that workers make a fair wage and can take care of their families," some of you may say.  Unfortunately, raising the minimum wage won't help workers; it will only hurt them.  One fallacy that economists warn against is the idea that good intentions will automatically bring about good results.  Just because something sounds like a good idea doesn't mean that it is.  This line of thought still rings true today because, while those in favor of raising the minimum wage may have their hearts in the right place, all they're really doing is falling into this exact fallacy.

According to the Law of Demand, a lower price on a product makes people more willing to buy that product.  Conversely, the Law of Supply states that a higher price makes people more willing to produce it.  Put simply, producers want to sell for the highest price possible and consumers want to buy for the lowest price possible.  These same principles can be applied to the labor market.  Companies (on the demand side) want to hire workers for the lowest possible wage and workers (on the supply side) want to be hired for the highest possible wage.  In a completely free market, wages will naturally come to an equilibrium point: the highest wage a company is willing to pay and the lowest wage workers are willing to take.  A lower wage would cause workers to refuse to work, and a higher wage would cause companies to refuse to hire people.  As a result, the market naturally comes to a point that makes both parties as happy as possible, or else the system would fall apart.  In a free market, this will always work; problems start to arise, however, when the government institutes a price floor (a minimum wage).

The problem with setting a minimum wage is that, no matter what the rate is set to, it fails to achieve the advertised goal.  If the minimum wage is set below the aforementioned equilibrium point, then it will have no effect on what companies pay their workers because they were already being paid a higher wage to begin with.  However, if the minimum wage is set above the equilibrium point, it creates a gap between the number of workers a company would be willing to hire at that particular wage and the number of people willing to work for that wage.  As stated above, the higher wage causes an increase in the supply of labor because more people are willing to work for that wage.  But, at the same time, the demand for labor decreases because of the increased cost, and thus companies have no incentive to hire new workers: the cost of adding an additional worker is higher than they are willing to pay.
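To make that logic concrete, here is a minimal sketch (in Python, purely for illustration; the curves and numbers are made up, not real labor-market data) of a linear supply-and-demand model and the gap that appears when a wage floor is set above the market-clearing wage:

```python
# Illustrative only: hypothetical linear labor demand and supply curves.
# The numbers are made up to show the mechanism, not to model a real labor market.

def labor_demanded(wage):
    # Companies hire fewer workers as the wage rises.
    return max(0, 1000 - 50 * wage)

def labor_supplied(wage):
    # More people are willing to work as the wage rises.
    return max(0, 100 * wage - 200)

# Market-clearing wage: 1000 - 50*w = 100*w - 200  =>  w = 8
print(labor_demanded(8), labor_supplied(8))   # 600 600 -> everyone willing to work at $8 is hired

# A floor below equilibrium (say $6) changes nothing: the market wage of $8 already exceeds it.
# A floor above equilibrium (say $10) opens a gap:
print(labor_demanded(10))                     # 500 jobs offered
print(labor_supplied(10))                     # 800 people seeking work -> 300 who want jobs can't get them
```

Below $8 the floor is irrelevant; above $8, the number of jobs offered falls while the number of job-seekers rises, which is exactly the gap described above.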

The other problem with raising the minimum wage (or having one in the first place) is that it only causes workers to be laid off.  Before the minimum wage increases, companies are used to paying their workers a particular wage corresponding to the value the company places on the workers' ability to produce (the wage is at equilibrium).  When the minimum wage is increased, the price of labor is artificially increased, meaning that the company is now paying more money for the same labor while taking in the same amount of revenue.  As a result, the company has to find a way to compensate for this additional cost.  One option would be to raise the price of its products, but that would cause fewer people to buy the product, and those who did buy it would buy it less often.  So, the only viable solution would be to fire workers so that it pays the increased wage to fewer people.  This means that an increase in the minimum wage will bring about higher unemployment figures and less productive companies.  Granted, that only applies if it's raised above the equilibrium point; otherwise there won't be any effect.  Even then there's still no point in raising the minimum wage.

Some of you may ask, "Then how do we help the workers who are barely making enough to get by? Are we supposed to just abandon them?"  To that I would respond: let the free market take its course.  That may not seem like an immediate solution, but I assure you that if the government would stop interfering with the private sector, things would go much more smoothly.  When companies are free to produce with minimal government intervention, they can produce more of their product, which lets them sell more and thus make more money to hire more workers and/or raise their current workers' wages naturally.  You may say that my logic is flawed because companies would have no incentive to raise wages on their own, but as companies produce more (and make more money), labor becomes more valuable, and workers will want their wages adjusted accordingly.  If companies refused to raise wages, they would run the risk of having their workers quit.  In other words, greater output and profits will naturally raise the equilibrium point and allow the free market to do its job to improve the economy and wages.

Raising the minimum wage won't help workers the way many people claim it will.  It will either prove to be an ineffective waste of time, because it is set below market equilibrium, or it will force companies to lay off workers, because the price of labor is artificially raised above equilibrium to a point where employers are unwilling to keep the same number of employees.  Raising the minimum wage may seem like the right thing to do.  But, in reality, it is a harmful approach to economic policy that has been tried before by the likes of Franklin Roosevelt and Jimmy Carter and failed in both cases.  Problems with "income inequality" don't come from companies being greedy or an inherent unfairness in the system.  They come from the government making arbitrary wage and salary decisions and redistributing wealth the way it sees fit rather than letting the free market address these concerns.  So, if Congress really wants to help workers make a decent wage, it should back off and let free enterprise do the job for them.

The Importance of the Filibuster

Yesterday, Senate Majority Leader Harry Reid called for a vote to change the Senate's rules so that cloture can be invoked on minority filibusters of presidential appointments with a simple majority vote rather than the three-fifths (60 out of 100) vote that was previously required.  While the issue of filibuster reform isn't new to the Senate (Reid threatened to change the rules earlier this year), this is the first time the rule change has actually occurred.  Senator Reid, along with numerous Democrats, has argued that this rule change is necessary due to the drastic increase in the number of filibusters used over the past few years, particularly during President Obama's time in the White House.  Republicans, on the other hand, argue that, although the change only applies to presidential appointments and not to legislation or Supreme Court nominations, it will haunt the chamber for years to come.  Senate Minority Leader Mitch McConnell even went so far as to call the rule change "nothing more than a power grab."  It's a controversial topic and one that, in my opinion, does merit some discussion.  However, this rule change is not the way to go about filibuster reform.  It may make the Senate function more quickly, but at the cost of the rights of the minority.

As a quick refresher, a filibuster is a tool used by the minority party to delay debate on or passage of a bill.  In a filibuster, a senator may speak on whatever topic he wants and for as long as he wishes, so long as he speaks continuously and does not give up the floor to another senator.  It grew out of the Senate's early rules and its tradition of unlimited debate, which were meant to give a greater voice to the minority party.  As I noted previously, the Founders wanted the legislative process to be slow and deliberate rather than fast and rash.  They even mention the fear of a tyrannical majority overriding the minority several times in The Federalist Papers.  This made the filibuster that much more useful.  Not only would it allow the minority to voice its opinion on legislation with which it greatly disagreed, but it would keep laws from passing too quickly and potentially causing damage to the economy or, say, a government organization.  It would also prevent the president and his party from stacking the federal court system and/or the Supreme Court with judges who would simply rubber-stamp their agenda.  This rule change, however, threatens to eliminate this vital part of the legislative process altogether.

If all that is required to invoke cloture (forcibly ending a filibuster through a vote) is 51 votes, then the majority party can essentially stop any filibuster by the minority that it wants to, which would most likely be all of them.  As such, the majority would be able to pass any bill or approve any presidential appointment or Supreme Court nomination without the minority being able to do anything about it.  This gives the majority an unprecedented amount of power that, if misused, may turn tyrannical, something the Founding Fathers wanted to avoid.  The reason the three-fifths requirement was included was to make it more difficult for the majority to silence the minority and thus let the minority voice its opinion more freely.  Now, I recognize that the rule change that occurred yesterday only affects the power to filibuster presidential appointments.  However, the same principle applies: the Democrats now have the ability to approve all of President Obama's appointments without Republicans getting the chance to bring about a reasonable debate over an appointee's qualifications.

That aside, though, Harry Reid's push to restrict the minority's ability to filibuster introduces other problems that may result in more tyrannical rule by the majority over the minority.  The first problem is that, by changing the rules regarding presidential appointments, Reid has opened the door to the same kind of rule change for legislation or Supreme Court nominees.  He has proven with this change that he has the votes to make it happen.  What would stop him from calling such a vote?  Some may argue that Senator Reid only intended this to be a temporary measure to reduce the number of Republican filibusters, but that brings me to the second problem with this rule change.  It is now highly unlikely that the rules will ever be changed back to their original form.  Again, you may say that this is only a temporary measure, but let me ask you this: what incentive would any majority party have to change the rules back?  The answer is that they would have none.  If the majority party (regardless of which party it is) can pass a bill or approve a nomination without the minority party being able to delay it or prevent it from coming to a vote, why would it change the rules to allow the minority to do just that?  Exactly, it wouldn't.

I'm not saying that the filibuster has absolutely no drawbacks; both parties have shown that, given the chance, they will abuse their power to filibuster simply to spite their opponent.  However, the costs to the democratic process are far too great to essentially eliminate the minority's ability to filibuster altogether.  Putting these kinds of restrictions on the filibuster will only serve to silence the minority in the long run.  There are other ways to go about filibuster reform.  The Senate could change the rules to require speakers to talk only about the bill or nominee being debated at the time.  The Senate could even put a time limit on filibusters so that the minority could still voice its opinion without completely preventing the Senate from doing its job.  Stripping the minority of its ability to filibuster will only lead to the majority exerting its will upon the Senate with no regard for the rights of the minority.  As Thomas Jefferson once said, "All, too, will bear in mind this sacred principle, that though the will of the majority is in all cases to prevail, that will to be rightful must be reasonable; that the minority possess their equal rights, which equal law must protect, and to violate would be oppression."

The Myth of the New Deal

This is a research paper I just finished writing for my composition class; I thought it was relevant enough to some of today's political debates to post here.  Hopefully you will find it insightful, though I'll warn you that it's quite a bit longer than anything else I've posted before.

Following the economic boom of 1921-1929, the period known as "The Roaring Twenties," the United States fell into a state of stagnation precipitated by a stock market crash in October of 1929, resulting in an economic collapse that lasted for more than a decade.  The period following this collapse would later be dubbed the Great Depression and be considered the worst economic crisis in the history of the United States.  In the decades following the depression, economists, historians, and politicians have continued to debate both the underlying causes of the Great Depression and the factors that contributed to the economic recovery of the 1940s that marked the depression's end.  Although those topics have become less prevalent in most political discussions of the modern day, the implications of these debates remain relevant, as they may provide insight into potential solutions to America's current economic downturn.  As such, this paper will seek to answer the question at the heart of the aforementioned debates: to what extent did government policy cause the Great Depression and lead to the economic recovery of the late 1940s?

This paper will examine the economic policy of "The Roaring Twenties," any change in policy that may have occurred at the beginning of the Hoover Administration, and the effects of those changes, if any, on the United States economy.  In addition, this paper will examine the economic policies instituted during Franklin Roosevelt's presidency (often referred to collectively as the New Deal), their similarities to or differences from previous economic policy, and their effectiveness in restoring the American economy in the long run.  In doing so, this paper intends to show that the federal government directly contributed to the onset of the Great Depression by adopting more burdensome policies for businesses both before and after the market crash of 1929, in contrast with the minimalistic, business-friendly policies of the previous decade.  Furthermore, this paper will show that the New Deal policies of the Roosevelt Administration only served to hinder America's economic growth and that recovery did not begin until President Roosevelt was forced to unleash American industry shortly before the United States became involved in World War II.

After taking office in 1921, President Harding employed the same pro-business, free market economic policies that had allowed America to prosper in the past.  Harding believed that, with regard to the economy, "We need vastly more freedom than we do regulation" (Murray 171).  Harding also proclaimed in a message to Congress in April of 1921 that "I have said to the people we meant to have less of Government in business as well as more business in Government" (Murray 172).  This statement would summarize his administration's entire economic policy.  One of the first actions Harding took as president was to have his Treasury Secretary, Andrew Mellon, conduct a study examining the effects of increased taxes on tax revenues.  The study indicated that higher taxes "…put a pressure on the tax payer to withdraw his capital from productive business and invest in tax exempt securities…" (Folsom 128).

After the study had concluded, Harding went about lowering tax rates for all citizens, including the wealthiest Americans.  Harding pushed for an atmosphere of greater cooperation between government and business rather than the adversarial approach that would be taken later that same decade.  As a result of Harding's economic policies, tax revenues more than doubled, rising from "roughly $300 million to $700 million" (De Rugy para. 5) within the first year of his presidency.  At the same time, Harding's tax cuts allowed the American economy to grow throughout the 1920s as the nation's Gross National Product (GNP) grew at a rate of 4.7 percent and unemployment fell from 6.7 to 3.2 percent (De Rugy para. 6).  This was due to the increased incentives to work, save, hire, and invest created when people, both rich and poor, were able to keep more of their own money to use as they wished.

In addition to increasing government revenue through lowered tax rates, Harding managed to limit government spending in order to keep the national debt and deficit under control.  One instance of this sort of fiscal responsibility came when he vetoed a bill that had come to his desk for the purpose of giving more benefits to World War I veterans.  In response to the criticism he received, Harding stated that although the country owed its veterans more than it could ever pay, the United States simply could not afford to give more benefits to anyone.  Public opinion was not in Harding's favor in this matter, but his actions helped reduce America's debt and keep federal spending under control.

When Harding died of a heart attack in 1923, his vice president, Calvin Coolidge, succeeded him and continued many of his established policies.  After Coolidge was inaugurated, he quickly became famous for his so-called "active inactivity" on the economy.  "In his Inaugural he asserted that the country had achieved 'a state of contentment seldom before seen,' and pledged himself to maintain the status quo" (whitehouse.gov para. 6).  Like his predecessor, Coolidge strongly opposed federal interference meant to keep the economic boom of the '20s in check.  He continued to call for tax cuts and little federal regulation or other interference so as not to stunt America's economic prosperity.  As a result, by 1924, he was hailed as having brought the country into what was called "Coolidge prosperity."  This prosperity would last until the stock market crashed in 1929, after Coolidge had left office and Herbert Hoover had taken over.  While Hoover has received much of the blame for the onset of the Great Depression, that blame is usually assigned for a supposedly "minimalistic" approach to government intervention.  However, the historical record seems to indicate otherwise.  For the first time in American history, the government stepped in as part of an attempt to solve the country's economic woes and, as Gene Smiley points out, "What failed in the 1930s were governments, in their eagerness to direct economic activity to achieve political ends—ends that were often contradictory" (Smiley para. 10).

Early in 1929, many European countries stopped paying off their loans to the United States, leaving America with a large debt to repay despite what Presidents Harding and Coolidge had done to reduce it.  Compounding this damage was the passage of the Smoot-Hawley Tariff Act, which placed tariffs on thousands of imported goods and increased tariffs on hundreds more.  President Hoover received numerous letters imploring him to veto the legislation, but he signed it into law regardless, resulting in foreign countries placing higher tariffs on American goods exported to those countries.  In addition, Hoover attempted to avert economic crisis by instituting policies contrary to the ideas employed by his predecessors.  He raised taxes and urged companies to keep wages artificially high, although he did not pass any law establishing a minimum wage.  Instead, Hoover induced higher prices to make firms more willing to produce.  While Hoover intended to maintain spending across the consumer spectrum, most consumers were still feeling the effects of the stock market crash and were unwilling or unable to pay inflated prices for goods.  As a result, foreign trade in the US stagnated, as did American industry.  Adding to the president's mistakes, the Federal Reserve responded to the market crash by "…cutting the money supply by nearly a third, thus choking off hopes of a recovery. Consequently, many banks suffering liquidity problems simply went under…" (Investopedia para. 4), resulting in the rampant bank closures of the early years of the depression.

Herbert Hoover’s presidency marked a change in economic policy from the business-friendly policies of the Harding and Coolidge Administrations to a more interventionist approach, especially following the stock market crash in 1929.  This change in policy would continue through the duration of Hoover’s presidency and, ultimately, do little to alleviate the economic downturn America was experiencing.  In the presidential election of 1932, Franklin Delano Roosevelt promised to restore America’s former prosperity and to put an end to the period of unchecked private sector growth that he blamed for the depression.  He claimed that the depression was brought about “primarily…because rulers of the exchange of mankind’s goods have failed through their own stubbornness and their own incompetence” and that by “…direct recruiting by the Government itself…” and “…engaging on a national scale in a redistribution…” (Roosevelt para. 4 and 9) America’s economy could be restored.

However, despite Roosevelt's claims to the contrary, his philosophy on the role of government in the economy was quite similar to that of Herbert Hoover.  President Roosevelt began a series of government programs, later called the New Deal, modeled after the economic philosophy of John Maynard Keynes, an economist who believed that it was necessary for the government to take an active role in reversing economic downturns.  These programs were designed to reduce unemployment, initiate several public works projects, and rein in the private sector, particularly the banking industry, so as to avoid another stock market crash and ensure that the big capitalists did not take advantage of consumers or their workers.  One of the first measures taken by Roosevelt was the "bank holiday," in which every bank in the nation was closed down for one week so it could be "screened" by the federal government.

At the end of the one-week period, only the banks that the government had determined were stable enough to continue functioning would be allowed to reopen.  The purpose of the holiday was to ensure that the people's money would be left in the most stable banks.  However, the bank holiday only hurt the industry as a whole.  "Banks needed permission from the secretary of the Treasury to do anything.  Businesses were undoubtedly reluctant to accept checks because banks couldn't clear checks" (Powell 54).  In contrast, during the Panic of 1907, industrialist J.P. Morgan took control of the bank rescue and allowed banks to continue clearing checks despite being closed, resulting in a much more rapid and efficient recovery.  As Murray Newton Rothbard asserts, "The laissez-faire method would have permitted the banks to close…" and "…be transferred to the ownership of their depositors.  There would have been a vast, but rapid, deflation, with the money supply falling to virtually 100 percent of the nation's gold stock" (Rothbard 329).

Another measure taken by Roosevelt during the Great Depression was to increase the amount of revenue available to the government for public works projects through increased taxes.  Roosevelt chose to keep the excise taxes established under Hoover, introduce new taxes such as the social security tax, and raise taxes on the wealthiest Americans in an attempt to “…equalize wealth, which Roosevelt thought was especially important during such a time of economic hardship” (Folsom 131).  Unfortunately, these additional taxes did little to help improve the economy.  In fact, they had the opposite effect, putting a greater burden on lower earners as well as corporations and reducing the amount of revenue that the government took in.

The burden of the social security tax–a small tax on income meant to fund the Social Security System and provide a safety net for those unable or too old to work–fell hardest on low-income families since only the first $3,000 of income was subject to social security taxes.  Tax rates on the rich were increased as well, with the highest marginal tax rate raised "…to 79 percent, the highest in US history" (Folsom 128).  The increased rate added to the financial burden carried by the top earners in America, and, just as Andrew Mellon's aforementioned study predicted, more capital was driven out of productive use, bringing in little additional revenue for the federal government.  Ultimately, Roosevelt's tax policy did little to balance the budget and instead accompanied growing government expenditures, mostly on public works projects meant to provide relief for those left unemployed by the onset of the Great Depression.

Perhaps the most notable programs enacted under the New Deal were the previously mentioned public works projects and the government agencies set up to rebuild the economy.  One of these agencies was the Agricultural Adjustment Administration (AAA), which was established to ensure that the farming industry remained solvent throughout the depression.  To do so, the AAA offered subsidies to farmers who were willing to destroy a percentage of their product and, in some cases, not grow their crops in the first place.  This procedure was meant to keep the price of crops high in order to keep those farmers in business as well as maintain high wages.  In practice, however, these artificially high prices strained an America that desperately needed inexpensive food.  Most middle class citizens could not afford the higher prices, and food shortages continued into the late 1930s.  The high wages also "…led to further job loss, particularly in manufacturing" (Cole para. 9).  Thus, government action to artificially induce higher prices and wages only served to add to unemployment "…because companies couldn't afford to keep large payrolls at the rates set by the government" (Investopedia para. 10).

Public works projects proved to be failures as well; while many succeeded in creating jobs, the vast majority of these were temporary positions in construction projects, which would end within a few years.  Many public works projects also failed to increase production of valuable goods as most of the projects focused on rebuilding America’s infrastructure and creating jobs regardless of where those jobs were actually needed.  As a result, much of the additional work force was put into areas that did not greatly contribute to economic growth.  As Professor Joab Corey of Florida State University’s Department of Economics states, “If you pay half the unemployed people to dig holes and the other half to fill them up, everybody’s going to be employed, but nothing is going to be produced.”  President Roosevelt’s efforts to rebuild America’s economy through government intervention proved unsuccessful and in some cases counterproductive.

Only when it became necessary to build up America's defenses against the threat posed by Germany, its allies in Europe, and Japan did Roosevelt change course: he "…wanted lots of things made inexpensively, and pushed wages and prices below market levels" (Investopedia para. 11) so as to get businesses to produce goods to be used in the war.  As a result, businesses had more capital to put toward production and, thus, fuel the economy.  With this increase in production, and with the federal government purchasing more and more war goods, businesses were able to hire more workers and increase wages naturally as the market began to improve and unemployment dropped.  Later, "when the war finished, the trade routes remained open and the post-war era went from recovery to a bull run in a few short years" (Investopedia para. 11), resulting in the economic prosperity America experienced in the late 1940s and continuing through the 1950s.

Ultimately, there was no failure of private business or free enterprise that created the Great Depression; the market simply went through the natural process of boom and bust inherent in any free economy.  It was government intervention meant to resolve the initial stock market crash in 1929 that actually produced the depression.  In spite of this reality, though, Franklin Roosevelt blamed his predecessor not for the actions Hoover took while in office, but rather for remaining inactive, which ran counter to the truth. By blaming Hoover’s policies, Roosevelt justified his initiation of a Keynesian economic recovery.  However, the policies enacted under Roosevelt’s New Deal were largely unsuccessful and often contributed to the prolonging of the depression.  Higher taxes and wages put a greater burden on business and led to high unemployment throughout the 1930s.

Although Roosevelt was able to bring unemployment down slightly with public works projects focused on infrastructure, the jobs created were temporary and did not increase production of valuable goods.  Only during World War II, when Roosevelt reversed several of the anti-business policies of the New Deal, did the economy begin to recover, thanks to the surge in production made possible by the increase in the amount of capital that businesses were allowed to keep and spend on additional product.  After the war, this change in policy remained, allowing international trade to flourish once again.  In short, it was the government's policy of intervening in the private sector, rather than allowing the economy to right itself through natural market forces, that was directly responsible for the onset of the Great Depression, and it was the subsequent New Deal programs that perpetuated the decline the depression brought on.

Works Cited

1. Cole, Harold L., and Lee E. Ohanian. “How Government Prolonged The Depression.” The Wall Street Journal 2 Feb. 2009: n. pag. Print.

2. Corey, Joab. Personal Interview. 6 Nov. 2013.

3. De Rugy, Veronique. "1920s Income Tax Cuts Sparked Economic Growth and Raised Federal Revenues." Cato Institute. N.p., 4 Mar. 2003. Web. 16 Nov. 2013.

4. Folsom, Burton W., Jr. New Deal or Raw Deal? New York: Threshold Editions, 2008. Print.

5. Murray, Robert K. The Harding Era. University of Minnesota Press, 1969. Print.

6. Powell, Jim. FDR’s Folly: How Roosevelt and His New Deal Prolonged the Great Depression. New York: Crown Forum, 2003. Print.

7. Roosevelt, Franklin. "First Inaugural Address." Washington, DC. 4 Mar. 1933.

8. Rothbard, Murray Newton. America’s Great Depression. Princeton, NJ: Van Nostrand, 1963. Print.

9. Smiley, Gene. Rethinking the Great Depression. Chicago: I.R. Dee, 2002. Print.

10. "Warren G. Harding." The White House. Web. 26 Apr. 2012. <http://www.whitehouse.gov/about/presidents/warrenharding>.

11. “What Caused The Great Depression?” Investopedia. N.p., 26 Feb. 2009. Web. 31 Oct. 2013.

The Problem With Congress and What We Can Do About It

I would like to give credit to Navy1630 and the anonymous commenter on my debt ceiling post for inspiring me to write this.

It's no secret that Congress isn't very popular amongst the American people.  As of August, it has a 14% approval rating (a historic low), and that number is likely to drop even further with the current debate going on about Obamacare and the debt ceiling (see my post about the debt ceiling for more details on that).  The problem most often cited, both by members of Congress and by people I have talked to, is that there is too much political posturing.  I agree that both sides are incredibly stubborn on certain issues, but I don't think that is what's really wrong with Congress.  While bipartisanship can help resolve a particularly controversial issue in Washington, political posturing has its uses too.  In fact, Congress was designed with that in mind.

The Founders counted on disagreement, and they actually wanted it.  If the members of Congress are forced to debate their ideas and really think about the positives and negatives of a bill, they are more likely to come to a well-thought-out conclusion rather than rushing to pass a bill without any consideration of its impact.  Not only that, but standing up for what they believe in and what their constituents believe is why members of Congress are elected in the first place.  No, the problem isn't that politicians dig in on their positions (like I said, that can be a good thing); the problem is that many of them have become more interested in advancing their political careers than in following the will of their constituents.

The main reason for this is that they are allowed to stay in Congress for as long as they keep getting reelected.  As a result, there are a number of representatives and senators who have been in Congress for several decades.  For example, Harry Reid, the current Senate Majority Leader, will have been in office for 36 years at the end of this term.  Some may argue that we need people with a good amount of experience in office to run the legislative branch.  The problem with that argument is that, as the years go on, members of Congress tend to get out of touch with their constituents.  Thus, instead of actually doing what their constituents want, they will do whatever they think they can get away with and then try to appeal to their party's base when the next election year comes around.

Also, if we need people with experience in office to run Congress, why don't we let the president stay in office for as long as he can get reelected?  Sure, that's how things used to be, but after FDR we realized that it's all too easy for someone to be reelected as many times as he wants if he remains popular (i.e., panders to the crucial demographics needed to get reelected).  And so we passed an amendment restricting the number of terms a president could serve.  The Founders never envisioned people staying in office for decades; they assumed that people would serve for a term or two and then retire.  That's why George Washington set that precedent when he refused to stay in office after two terms.  After FDR, we realized that we had to force that standard upon presidents for the sake of ensuring that no president could ever become the king-like figure that we had fought to rid ourselves of.

However, going back to the issue of Congress, the worst part of the problem is that we bear most of the responsibility.  We're the ones who keep voting these people back into office.  Every election we believe their claims that they're different from the other people in Washington or that they'll stand up for us this term if only we'd reelect them.  Every election they say this, and every election we fall for it.  And that only serves to keep in office the same politicians who will continue to advance their own agenda over that of the American people.  The only way this cycle can be broken is if we break it ourselves.  So, if you are unhappy with the job that your representative or senator is doing, don't vote for them.  Don't allow yourself to blindly accept what the incumbent tells you; make up your own mind and vote based on what you think.

Forget about party affiliation and vote based on who you believe will really represent your views in Washington.  And if the incumbent is not that person, you should seriously consider looking into their opponents' positions (both in the primary and in the general election).  However, voting out the members of Congress who don't represent their constituents' views and values is not the only way.  The one thing that can guarantee that members of Congress can't abuse their power or ignore their constituents for decades, as they have been, is term limits.  This idea may seem unlikely to be popular in Congress, and thus difficult to pass as an amendment, but that doesn't mean it can never come to pass.  If we all stand up to Congress and tell them to pass the amendment or be voted out of office, they will listen.  Or, if they don't, we can find people who will.  If we stand united in our efforts, we will have our voices heard.

We the people decide what the government does, not the other way around.  In the end who you decide to vote into office and what limits you put on their ability to place their own agenda above yours directly affects the direction that this country will go in.  The future of this country rests with you, as it always has.  Only by standing up for what you believe in and standing up to those who are not doing their jobs in Washington will you be able to set this country back on the right course.  It is time for the American people to take back this country and restore the ideas and values that at one time made us the greatest country on Earth.  It is time for us to take control of our country once again.

Why Raising the Debt Ceiling is a Bad Idea


As the United States' debt slowly approaches $17 trillion (currently it's at about $16.94 trillion), Congress is, once again, about to take up the issue of whether or not to raise the nation's debt ceiling, an issue that will have to be resolved sometime within the next few weeks.  President Obama and his supporters have said that raising the debt ceiling is crucial to maintaining the full faith and credit of the US, and that they will not negotiate when it comes to raising it.  Meanwhile, the GOP has said that it will raise the debt ceiling if, and only if, Democrats agree to delay implementation of the President's signature health care law.  Both sides have accused each other of political posturing (and, let's be honest, they're all guilty), as they have the last few times this issue has come up.  Personally, I think that both sides are wrong.  We shouldn't even be talking about raising the debt ceiling, because the debt ceiling shouldn't be raised regardless of any concessions one side makes.

The US national debt has become a real problem–and it's only getting worse as Washington continues to spend beyond its means.  The spending won't stop until Washington's ability to increase the debt stops.  The only way to do that is to not raise the debt ceiling.  Now, raising the debt ceiling in and of itself won't increase spending, and refusing to raise it won't automatically cut spending, but the decision will shape how much Congress can spend.  If the debt ceiling is raised, Congress can continue to spend beyond its means and increase spending until it reaches the new ceiling.  But keeping the debt ceiling where it is will force Congress to cut spending to the point where it at least breaks even, since it won't be able to rack up any more debt.  Some would argue that not raising the debt ceiling would cause the US to default on its bills and become a "deadbeat nation."  They would be right if the debt ceiling were a spending ceiling–but it's not.

The debt ceiling does not keep the US government from spending money; it keeps it from spending more money than it takes in through taxes.  Not raising the debt ceiling will not keep the US from paying its bills, but it will limit which bills it can pay.  In other words, the US government won't be able to spend money on all of the things it has been spending money on.  However, it can still allocate money to the programs it prioritizes based on how essential they are.  So, the government could pay for Social Security, military and other federal employee salaries, interest on the national debt, the Education Department budget, or whatever else it wanted to keep running.  But because it couldn't increase the debt, it would have to make cutbacks.  The key here is to force Congress to prioritize and then cut spending back to more reasonable levels.  That means that the government would have to live within its means just like the citizens of this great country (shocking concept, isn't it?).

Raising the debt ceiling is only going to perpetuate more federal spending–those who act as though our debt is not a problem are, frankly, kidding themselves and us.  Not raising the debt ceiling may be painful now, but if we allow it to be raised again and allow the government to continue spending more than it takes in, things will only be worse down the road.  If you are concerned about revenue, then create an environment in which people will spend more of their money.  Cut taxes and regulations and make it more desirable for businesses to invest and spend money, and the government will get more revenue through a greater influx of tax money.  But spending more money than you have on more programs and bureaucracy than you can afford is not the way to go.

That mindset is exactly the same as the one that forced Detroit to declare bankruptcy not too long ago (see my post about Detroit for a more detailed account).  Detroit spent money it didn't have and made promises to unions that it couldn't afford to keep, and it cost the city dearly (literally).  Now, if Detroit's leaders want to restore the city's economic prosperity of the 1950s, they have to start living within their means and make serious cutbacks in their budget.  Our federal government is facing a similar situation: either embrace a more fiscally sound approach to spending or face defaulting on its payments.  There's a reason that no civilization has ever spent its way out of an economic downturn: it doesn't work.  If the government is not willing to do what it takes to stop the incessant deficit spending that has contributed so greatly to our debt, we may well end up like Detroit.

The Pitfalls of Political Correctness

I think it is pretty clear that, over the years, we have become a much more politically correct nation.  We try to avoid saying anything offensive toward a particular group or groups of people in an attempt to maintain civil, polite debate amongst ourselves.  This is perfectly understandable; civil debate is always more desirable than both sides slinging insults and derogatory terms at each other.  However, what started as an honest attempt to eliminate offensive language and frame debate in a more civilized manner has become a means of silencing one side of an argument.  As one side is deemed "politically correct," the other is branded as hateful and is dismissed without consideration being given to any merits that side may have.  Political correctness has transformed from a tool of social justice into a means of limiting free speech by silencing all dissenters and shaming them into conforming with what is generally considered to be inoffensive.

The most common examples of this are the abortion debate, gay marriage, and issues of race relations.  When it comes to the abortion argument, pro-lifers are constantly forced to go on the defensive as they are accused of wanting to deny women the right to make choices about their own healthcare, of being sexist, or of forcing their religious views on others (many pro-lifers cite their faith as one reason they oppose abortion).  The argument is already tilted in favor of the pro-choice side because of the stigma attached to the pro-life stance.  It is not politically correct to make such suggestions about women's rights, and so, regardless of the fact that the pro-life argument does not involve women's healthcare (see my post about Texas' abortion law for more details), it is dismissed as sexist or old-fashioned.

Much the same thing can be found in the gay marriage issue.  Since it is politically correct to consider everyone equal (which just about all Americans do), anything that could be considered as infringing on that equality is automatically frowned upon.  Once again, the argument is framed in favor of those who believe in gay marriage at the expense of the other side.  As a result, anyone who supports traditional marriage is accused of being homophobic.  And, once again, the opposing argument is dismissed without a second thought.  The whole point of supporting traditional marriage, that marriage is an institution defined by the church and that the government cannot make any law regarding the definition of the institution, is missed entirely in favor of assigning a label to the argument and attempting to force the “politically correct” view upon the church.

Finally, and perhaps most notably, political correctness prevents just about any meaningful discussion of race relations.  It used to be that only derogatory language and other insults were politically incorrect, leaving plenty of room for honest discussion.  Now, almost any criticism of a minority group (I'll use the black community as an example) is considered racism regardless of the intent behind the criticism or the merit of said criticism.  The real problems are ignored while an issue is made out of what could hardly be considered racist.  For example, black-on-black and black-on-white crime rates are much higher than white-on-black crime rates, and yet what do people like Al Sharpton and Jesse Jackson focus on whenever they get the chance?  They focus on people like George Zimmerman, who, while not completely innocent (see my post on the George Zimmerman verdict), was most likely not racially motivated given that he both voted for Obama and started a business with a black man.

Meanwhile, two black teenagers in Kansas who murdered a white kid because they didn't like white people never got any attention beyond local news coverage.  Meanwhile, rampant black-on-black crime in places like Detroit and Chicago doesn't draw the attention of Reverend Sharpton.  Instead, Sharpton and Jackson refuse to even address the issue and see no real reason to do so, and anyone who tries to say otherwise is accused of being racist or of ignoring the "real" problem (that white people are racist).  The focus isn't on the problems the black community faces.  The focus is instead on the problems supposedly imposed on the black community by white people.  It has gotten to the point where intelligent conversation is next to impossible without it devolving into a white-versus-black argument.  That's what political correctness has done to our society.  Political correctness has made people fearful to speak their minds because of the potential repercussions.

Don't get me wrong, I think that we should still be respectful of each other in regular conversation and in debate.  However, political correctness threatens the very nature of debate by silencing, or at least negatively stigmatizing, one side of the argument.  It seizes on one small aspect of someone's argument, distorts an honest opinion that has just as much merit as the other side's, or ignores the actual point of what is being said.  People should be able to express their views without fear of being accused of being sexist or homophobic or racist or otherwise hateful.  I'm pro-life, but I don't think of women as inferior.  I support traditional marriage, but I don't hate homosexuals.  I believe that there are problems within the black community that need to be addressed, but I'm not racist.  I think people should be able to express their religions freely in public, but I don't want to force Christianity on anyone.  It's time people realized that dissent does not equal hate and that, if you really want to be politically correct, you should listen to and consider all sides of an argument.

Enough About Gun Control Already

Two days ago, Democratic lawmakers William Pascrell (NJ) and Danny Davis (IL) proposed legislation that would increase taxes on the sale of guns and ammunition, which they call "…a major investment in the protection of our children and our communities…"  They claim that the bill will lead to a decrease in gun sales and gun violence (and crime in general) while, at the same time, bringing in more revenue for the federal government.  Critics assert that the bill conflicts with the Second Amendment and will be ineffective in reducing gun crime.  This bill represents the latest in a series of attempts by legislators to enact stricter gun control laws following the shootings in Aurora, CO and, more recently, at Sandy Hook Elementary School in Newtown, CT.  To their credit, Pascrell and Davis are right about one thing: gun sales will most likely drop if their bill passes.  However, the rest of their predictions simply aren't grounded in the facts.  Taxing guns and ammo isn't going to reduce gun crime or bring in additional revenue, nor will any of the other proposed gun control measures, and gun control advocates don't seem to understand that.

Constitutional issues aside, restricting law-abiding citizens' ability to keep and bear arms won't bring down crime rates; it will only make those citizens easier targets.  An unarmed populace is much easier to exploit for those wishing to do harm, because citizens are left with only limited means of defending themselves.  Simply put, guns help prevent crimes.  Taking them away or limiting which guns people can own only contributes to the crime problem.  In fact, many of the cities with rampant crime (Chicago, Detroit, and Washington DC, to name a few) also have stricter gun control laws.

The reason for this pattern is that criminals don't care what the law says; they'll get a gun if they really want one, whether that means getting one illegally or paying the higher tax.  They will get a gun one way or another, or, if that's not an option, they will use some other weapon.  For example, around the same time as the Newtown shooting, a man in China broke into a school and stabbed 12 people.  He didn't need a gun to commit that crime, and neither do other criminals.  That leaves the rest of us with limited access to firearms and no other options.  As a result, we are essentially powerless to stop crimes being committed in front of us.  Bottom line: having a well-armed populace deters criminals and prevents crimes.

Even the notion that this bill will bring in more revenue for the federal government is false.  By raising the tax on a product, you reduce the incentive for people to buy that product.  Thus, fewer people will buy guns, and those who still do will buy them less frequently, meaning less revenue flows to the government.  Now, you could argue that the higher rate would compensate for the reduced frequency of sales, but for that to work the tax rate would have to be much higher, and then you would run into the same problem: fewer people buying.  And, as that's happening, criminals are still getting guns illegally while the government focuses on law-abiding citizens.
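To see why the revenue math is unlikely to work out the way the bill's sponsors hope, here is a back-of-the-envelope sketch; the figures are entirely made up (the actual tax amounts and the actual drop in sales are unknown), so treat it as an illustration of the tradeoff rather than a prediction:

```python
# Hypothetical numbers only: revenue = tax collected per sale * number of sales.
# If sales fall proportionally more than the tax rises, total revenue goes down.

baseline_tax_per_sale = 20.0       # assumed tax collected on each gun sale today
baseline_sales = 1_000_000         # assumed annual gun sales

proposed_tax_per_sale = 40.0       # assumed tax per sale after the increase
sales_after_increase = 400_000     # assumed sales once buyers respond to the higher price

print(baseline_tax_per_sale * baseline_sales)        # 20000000.0 (revenue before the increase)
print(proposed_tax_per_sale * sales_after_increase)  # 16000000.0 (revenue after) -> less, not more
```

If sales dropped only slightly, revenue could rise, so the sponsors' claim hinges entirely on how buyers respond; the argument here is that the drop would be steep, and that criminals buying illegally wouldn't pay the tax either way.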

I’m not saying we should do nothing to prevent further crimes or atrocities like the Newtown shooting, but limiting our ability to defend ourselves is not the answer.  There are plenty of other things that can be done.  Some people have proposed expanding background checks, and I have no problem with that so long as it doesn’t become a gun registry.  Background checks may not keep all criminals from committing crimes with guns, but, provided that the system is run properly, it will cut off their ability to legally obtain a gun.  You can also institute harsher penalties on crimes committed with illegally obtained guns or crimes committed with guns in general.  Another idea is to reform the mental health system so it’s easier to identify and stop mentally disturbed people such as the Aurora and Newtown shooters before they act.  But, perhaps the most important thing we can do is to not disarm the populace.

Taking away people’s guns is the quickest way for any city to slowly fall into chaos as crime becomes more and more rampant.  Guns are the last form of defense against tyranny or any other form of attack; if we really want to protect our children, let’s make sure that defense stays in place.  Case in point: if you were a criminal and you had to choose between robbing a house in which the inhabitants were gun owners or one with a “Proudly Gun Free” sign in the front yard, which would you rob?  Or, perhaps a better question, if you were being attacked, would you rather have nothing but a whistle to defend yourself with?  Or would you rather have a loaded Glock 30?  Exactly.

Voter ID Laws: Setting the Record Straight

I was going to write a post about personal responsibility today, but, after hearing about this repeatedly, I felt I should get my thoughts out on the subject.

A few days ago, North Carolina Governor Pat McCrory signed a law requiring citizens to show a photo ID at polling stations, a measure meant to cut down on voter fraud in the state's elections.  This new law, and ones like it in other states, has drawn much criticism, mainly from Democrats and institutions such as the NAACP and ACLU.  Critics claim that the law is meant to suppress voter turnout amongst the poor, minorities, and the elderly (all of whom typically vote Democrat).  They claim that the law is a discriminatory measure taken by Republicans in order to give themselves a better chance in future elections.  Nothing could be further from the truth.

As I said before, the main purpose of the law is to prevent voter fraud, which, despite what some may think, can be a real problem.  You need look no further than the scandal surrounding ACORN in the 2008 election and, more recently, the Cincinnati poll worker who was convicted of voting multiple times and for multiple people to know that voter fraud does happen in America.  This law isn't trying to keep law-abiding citizens from voting; it's trying to keep would-be criminals and dead people from voting.  As for the accusation that Republicans are trying to keep the poor and minorities from voting, it is an absolute fabrication.  Anyone can get a free photo ID at any DMV in the state at any time during business hours, and those who don't already have a photo ID have a year between now and the next election to get one.  Getting a photo ID is far from impossible for those less fortunate than most.

Regardless, this voter ID law is not forcing citizens to do anything that they wouldn't have to do in order to buy alcohol, see an R-rated movie, buy Sudafed, get a lottery ticket, board an airplane, cash a check, or apply for welfare and public housing.  If you need to show an ID to get these services, why is it disenfranchisement to require a photo ID in order to vote?  Is it then a Republican scheme to disenfranchise minorities and poor people by making them show a photo ID for the aforementioned services as well?  Of course not.  It's a legitimate way to ensure that individuals are who they say they are and that they are in fact eligible for that service.  If you're worried about people not being able to get an ID, put the energy into voter registration events, not into opposing a common sense piece of legislation.

Everyone recognizes that people have the unalienable right to cast their vote if they so desire, but I don’t think it’s unreasonable to verify a person’s identity when they vote to ensure they only vote once and that they are voting for themselves and not somebody else.  North Carolina’s voter ID law is aimed at the people who would seek to break the law (the people who made such a voter ID law necessary), not those who live by it.  It does not prevent anyone from voting nor does it introduce any requirements that citizens don’t have to meet in order to get other services.  It is not a means of disenfranchisement, it is a means of making sure that everyone’s vote is counted fairly and is counted once.

Detroit’s Bleak Future and What it Means for the Rest of Us

Yesterday, the city of Detroit filed for bankruptcy, not one year after President Obama swore that he would not allow the Motor City to go under.  Detroit has been in trouble for some time now, and it really isn't too surprising that it's now filing for bankruptcy.  But Detroit's story goes beyond the city limits; it has implications that affect the rest of the country.  It's about more than the workers who could lose their pensions because the city can't afford to pay them.  It's about more than Detroit's ever-growing debt, which now totals around $18 billion.  It's about the dangers of living beyond one's means, empowering labor unions by giving in to their every demand, and allowing the government to provide services through redistribution of wealth.

Detroit reached its peak in the 1950s with the rise of the auto industry.  At that time, Detroit was the fourth-largest US city with one of the highest per capita incomes.  That changed as people began to move to the suburbs, taking their business and taxable income with them and shrinking the city's population.  The process was accelerated by the "race riots" of the 1960s, which drove more people out of the city in an attempt to escape the violence.  With its tax base crumbling and companies moving to other major cities where it was more desirable to do business, Detroit slowly began to collapse.  However, this was not the end of the Motor City's troubles.  Starting in the 1960s, city officials began granting union requests for higher pensions, promises that became harder and harder to keep as the city's economy declined.

High taxes and even higher spending discouraged business, eliminated jobs (Detroit currently has an unemployment rate of 18%), and racked up what has become an $18 billion debt.  These self-destructive policies continued over the years until, this past March, Governor Rick Snyder appointed Kevyn Orr as Emergency Manager for the city.  After looking at all of the options, Orr decided the best course of action was to have the city file for bankruptcy.  Ingham County judge Rosemarie Aquilina has ordered Detroit to withdraw its filing, but, regardless of how things play out, Detroit's future is far from bright.

There is a lesson to be learned from Detroit, and its struggle serves as a warning to the rest of the country.  Detroit brought about its own ruin by making promise after promise that it was financially incapable of keeping.  Despite a lack of revenue, city officials kept spending (about $100 million more than the city takes in each year), inflating the pensions of union workers beyond the city's ability to pay and borrowing money to cover other expenses.  The city continued to tax residents and businesses at an excessive rate in an attempt to cover those costs, but doing so only drove away business, increased unemployment, and increased crime as the city became unable to keep law enforcement properly financed.

So, what's the lesson in all of this?  Simple: if you want your city to prosper, you must be willing to practice fiscal responsibility.  You must be willing to live within your means.  You must be willing to rein in labor unions when they start making demands that you either cannot meet or that will have a negative impact on business.  You must be willing to let the people (rich or poor) do with their money as they wish, because more money in the hands of the people means more business transactions and more economic growth.  Finally, you must be willing to change your policy when it becomes clear that your current one is not working.  The lesson from Detroit is that if you want to bolster your city's growth, don't go out of your way to bring about its collapse.

The Nature of the US Constitution

Regardless of what social or economic issue our politicians debate, in the end, their disagreement comes down to one thing: how they interpret the Constitution.  One side argues that the government can do more because of implied powers, while the other claims the government should exercise only the powers expressly given to it in the Constitution.  This debate is hardly surprising.  Since it was first penned, the Constitution has been a source of debate.  The Founders debated whether it gave the government too much power.  They debated whether it was necessary to add a Bill of Rights to limit any potential infringement on the people’s rights.  And they debated the very nature of the document itself.  Were its words set in stone, meant to be taken as they were written, or were they more flexible, offering implied powers to the government?

John Adams' Federalists tended to believe the latter, just as Democrats do today.  Similarly, Thomas Jefferson's Democratic-Republicans favored a strict constructionist view, as do modern-day Republicans.  The debate has continued beyond the Founders' time, and more and more people now believe that the Constitution, and its meaning, changes with the times; that the federal court system, and in particular the Supreme Court, has a duty to interpret the Constitution relative to the times we live in.  However, this school of thought violates what the Founders, even Adams, envisioned when they drafted what would become the Supreme Law of the Land.

Actually, the title "Supreme Law of the Land" points to the most obvious flaw in the notion that the Constitution's meaning changes with the times.  At its core, the Constitution is a series of laws meant to restrict the power of the federal government and, as with any other law, it doesn't change unless the people change it.  There is a reason the Founders included an amendment process in Article 5 of the Constitution: to provide a means of changing the document.  The Founders recognized that times, and the social environment, would change, so they included the amendment process to allow for that change.  As it turned out, amending the Constitution became a useful tool as times changed.  We amended the Constitution to abolish slavery.  We amended the Constitution to prevent the right to vote from being infringed upon on the basis of race or gender.  We amended the Constitution to keep our politicians from raising their own pay regardless of the situation the rest of the country was facing.  The amendment process was included for a reason: so that the Constitution could be changed to better fit the times and to address situations the Founders could not anticipate.

The Founders never meant for the courts to interpret the Constitution to determine what it means at any given time; they never even explicitly granted the Judicial Branch the power to do so.  Article 3 mainly serves to establish the Judicial Branch and define its jurisdiction.  It wasn't until the Marbury v. Madison ruling that the Supreme Court gained its power of judicial review.  However, even with that power, the Supreme Court never claimed to be able to reinterpret the Constitution.  The Supreme Court's duty, as established through Marbury v. Madison and other rulings under John Marshall's leadership, is to review laws and other actions taken by the federal government and determine whether they are consistent with the Constitution as it is currently written.  There is no reinterpreting of the Constitution itself involved.  Essentially, the Court is supposed to remain true to the vision of the Founders or to the changes made through the amendment process.

Some argue that we cannot know what the Founders would have wanted or, in some cases, what they meant when they wrote particular parts of the Constitution, and that presuming to know would be incredibly arrogant on our part…and they would be right.  It would be arrogant of us to presume to know exactly what the Founders meant.  Fortunately, we don't have to guess; they left us a way to better understand the complexities of the Constitution: the Federalist Papers.  The Federalist Papers were written specifically to clarify the Constitution's words and to convince people to support its ratification.  They go into detail about the nature of the Constitution as it relates to foreign invasion, domestic insurrection, taxation, disputes between states, the militia, the powers conferred by the Constitution, checks and balances, and the powers of the three branches of government.  Hamilton, Jay, and Madison even wrote multiple essays on each issue to ensure that they covered everything.  Thus, any uncertainties can be done away with by looking to the Federalist Papers (that's why the Supreme Court often looks to them when making a ruling).  As for the amendments, they tend to be fairly self-explanatory, and if uncertainties remain, one need only look at the context in which they were written and ratified.

Times change, and as they do, our Constitution may need to be altered to address the issues of the present.  However, the Constitution was never designed to change from time to time at the will of a federal court judge.  By that logic, a judge could interpret into existence powers for the federal government that it was never supposed to have (completely ignoring the 10th Amendment).  Similarly, one could interpret an amendment out of existence, because, after all, the Constitution changes meaning with the times, right?  What makes one interpretation any less valid than another?  It's a slippery slope, and one we already seem to be sliding down as federal judges are increasingly relied on to resolve disputes.  This is not to say that the Constitution should never be altered.  As I said before, the times may require adding to or changing the language of the document.  But the Constitution says what it means and means what it says, and any change should come through the proper channel (the amendment process) and not through a judge's opinion.