You Should (Not) Pay for My Stuff

Disclaimer: The EVA (Everyone’s a Victim…Always) Series is a group of posts in which I criticize liberal talking points by sarcastically playing devil’s advocate.  Given the heavy use of sarcasm, the contents of this series should not be taken literally, nor should they be interpreted as my attempt to stigmatize all liberals as thinking the same way.  If you are a liberal and are easily offended by people dissecting your worldview using sarcasm, then I suggest you take the following steps: close your eyes, click your heels three times while saying “There’s no place like MSNBC,” and press the red button in the corner of your web browser.  That should prevent any harmful ideas from entering your already closed mind.

Last Monday, the Supreme Court ruled that closely held (privately owned) companies can seek an exemption, for religious reasons, from Obamacare’s contraception mandate after Hobby Lobby filed suit claiming that the mandate would force the owners to violate their religious convictions.  Many people on the left were outraged by this decision, and, personally, I couldn’t agree more with their frustration.  (How dare a privately held company try to decide what benefits it wants to provide for its employees?)  This decision will only block women’s (or should I say ‘womyn’s’?) access to contraceptives that many desperately need, forcing them to pay for them out of pocket rather than relying on their employers to do so.  And I thought the 1950s were over.  However, while I do think that the mandate should have been upheld, I also believe that the mandate doesn’t go far enough.  I believe that the mandate should be expanded to cover another right that is clearly established in the Constitution, and, no, I don’t mean the right for womyn to be free from criticism or protest of their life choices (which the Republican Supreme Court completely ignored in their McCullen v. Coakley ruling).  No, I’m referring to something that can prove essential to one’s health, and which every American has had the right to own since the Bill of Rights was ratified in 1791: a gun.

The Second Amendment guarantees the “right of the people to keep and bear arms” and, as such, no one has the right to prevent me from buying/owning a firearm if I want to.  So, my employer should not be able to prevent me from getting a gun for any reason whatsoever, be it a religious objection to owning guns, a moral objection to owning a “killing machine”, or a simple desire to not pay for my things.  Put another way, my decision to own a gun is none of my employer’s business, and that’s exactly why they should be required to pay for it.  You see, because the Second Amendment guarantees my right to own a gun, my employer cannot take any action that would violate said right (e.g., refusing to pay for my gun), and so they have no business deciding which of my costs they will and will not cover.  Why should I be expected to pay for my own gun(s)?  What if I can’t afford to buy my own while earning minimum wage?  Am I just supposed to live without one?  That simply isn’t fair.  It isn’t as if guns are used strictly for recreational purposes like hunting or skeet shooting; some people use guns for self-defense.  What if someone were to break into my house?  Am I just supposed to let myself get robbed?  Am I supposed to rely on the police, who could be minutes away, to rescue me?  No, I shouldn’t have to do any of those things, because my employers have a responsibility, nay, an obligation, to provide me the protection I need from any potential intruders.  Simply put, they do not have the right to say “no” to me or anyone else whom they employ.

But, why stop with guns?  There are many other things that are vital for an employee if they intend to maintain their job, such as transportation and auto insurance.  Transportation is essential for any employee to be able to get to work each day, and, much like a gun, it is far too important for the employee to have to worry about whether or not they can afford it.  Therefore, I also believe that employers should be required to cover the cost of transportation for their employees.  Just like an employee’s choice to own a gun, this is a matter in which their boss does not have the right to choose otherwise.  Companies have certain obligations as employers to provide for the well-being of their employees no matter the cost so as to ensure that everyone can freely exercise their rights to life, liberty, and the pursuit of happiness.  Now, I don’t believe that employers should necessarily be required to cover the cost of a car for their employees.  Rather, I believe that each employee should decide whether they want to have their own vehicle or rely on public transportation and have their employer cover the cost of whichever they choose.  And, in the event that an employee chooses to get their own car, I believe that employers should be required to cover the cost of auto insurance.  An employee shouldn’t have to face financial devastation because they are involved in a car accident.  What if they can’t afford to pay for the damage themselves?  What if they’re left without any means of transportation because they can’t pay to have their car fixed?  These are questions that no one should have to ask themselves.  And so, this responsibility falls to their employers who, as I mentioned previously, have the duty to ensure that their employees’ well-being is maintained.

All of these measures will bring a level of choice into employees’ lives (something severely lacking in today’s world, especially here in America).  They will no longer be limited by what they can and cannot afford.  They will instead be free to carry out their jobs without having to worry about cost (kind of like most employers nowadays).  They will be free to live out their American Dream.  Some may argue, and many did during the Hobby Lobby case, that companies shouldn’t be forced to pay for things that their employees are perfectly capable of buying themselves.  However, this is a major oversimplification of the situations that many lower-level workers face.  Not every worker can afford everything they want and need, so someone who can afford it should cover the costs for them.  After all, everyone has the right to succeed, and, as such, no one has the right to keep others from achieving that success.  Others may argue that this would only make companies increase their prices to compensate for the extra cost.  But, so what?  If Americans have the rights to health care and firearms, then employers have the obligation to pay for them.  Why should anyone have to go without something because they can’t afford it?  They shouldn’t.  Now, employers don’t necessarily have to approve of all the decisions that their employees make, and I wouldn’t expect them to.  However, their approval should not trump an employee’s rights to healthcare, the keeping of arms, transportation, etc., by denying them the ability to exercise these rights due to financial limitations.  In short, I’m not asking that employers agree with their employees’ personal choices, I’m merely asking that they subsidize them.

Memo #5

Well, it took a few months, but I’m finally back to writing (you thought I’d given up on this, didn’t you?).  Don’t worry, I have no intention of taking such a long break again.  In addition to this memo, I have also put up the first in a new series called the EVA (Everyone’s a Victim…Always) Series.  In this series, I will be picking apart liberal arguments by using their own talking points, as well as a bit of sarcasm, against them.  For those of you who are liberal: if that doesn’t sound like something you want to read, then you might want to stay away.  So, now that I’m done plugging myself, I hope you enjoy the new post.  Thanks for sticking with me.

Jake

The Walking Dead Season 4 Review

Spoiler Alert: Although I tried to keep them to a minimum, there are still spoilers in this review for this season as well as previous seasons of The Walking Dead.  Do not read this if you haven’t seen season 4 and you care about spoilers.  Otherwise, I hope you enjoy the review.

I’m going to be honest, I wasn’t really looking forward to this season of The Walking Dead.  Sure, I enjoyed the third season, but the overall inconsistency with the show’s quality from season to season made me less eager to watch season 4.  Season 1 was an excellent start despite it only being 6 episodes, but season 2 ruined most of what made the show great by limiting the main setting to one location and having the writers decide that the only way to have tension was to have every character make as many stupid decisions as possible (like when Glenn went down that well as bait), eventually resulting in their lives being put in danger.  Not only that, but the whole of season 2, especially the first half, was incredibly padded out (particularly the search for Sophia), resulting in story arcs going on much longer than they should have, simply for the sake of meeting AMC’s demands.  Season 3 was better, but it kept many of the problems of the previous season, most notably the characters’ tendency to make stupid decisions on occasion (I’m looking at you, Andrea).  But, having said all of that, I still had some hope that season 4 could be a return to form for the show.  It could be a chance to bring back that same creative spark Frank Darabont brought to the table that made season 1 the best of the three.  Unfortunately, while season 4 has a few good episodes, the rest of the season ended up failing to deliver.

The most glaring issue with season 4 is that it lacks any sort of focus.  It has so many story and character threads that it doesn’t feel like a single story being told, but, rather, several smaller stories being presented one at a time.  Now, there’s nothing wrong with having a few subplots here and there, but season 4 is so disjointed that I can actually split it into three separate parts to review individually (which is what I’m going to do): the disease episodes, the Governor episodes, and the Terminus episodes.  The disease episodes, as you probably know, are the first five episodes of the season and, frankly, they are also the worst of the season.  They serve no purpose other than to kill off the expendable characters and to attempt to make Tyreese seem important.  But (and I hope you’re reading this, AMC), having a character pout for five episodes over someone we barely got to know and then having him get in people’s faces whenever they tell him to calm down doesn’t make him relevant or important…it makes him annoying.  The biggest problem I have with these episodes, though, was the revelation that Carol was the one who killed Karen and David.  It seemed incredibly random and out of character for Carol to do that rather than, say, Carl, who showed us at the end of season 3 that he would kill if he thought he was justified in doing so.  I understand that Carol isn’t helpless like she was in season 1, but I don’t think she had gotten to the point that she would kill two people for being sick just yet.  Ultimately, the first five episodes came across as a pointless excuse to drastically reduce the number of people, sacrificing much of the potential those characters had.  What a waste.

However, the show does start to pick up once it reaches the Governor episodes.  Although not perfect, the Governor episodes do an excellent job of developing the Governor’s character as he walks the line between redemption and damnation, culminating in his last stand at the prison.  These episodes also provide several emotional moments in the form of a few major character deaths, something the first few episodes might have benefited from.  This does bring me to my first complaint about these episodes: Judith’s “death”.  I may sound cruel when I say this, but I thought she should have stayed dead.  Now, before you start raging in the comments, hear me out.  Her death was such a powerful moment and had such an impact, not only on Rick and Carl, but on me as a viewer, that it seemed like a cop-out for the writers to have Judith miraculously survive without explaining how Tyreese rescued her (besides, the writers have shown in the past that they aren’t above killing off children).  My only other problem with these episodes is that they feel like they belong at the end of season 3.  In fact, the show probably would have been better off if it had eliminated the disease episodes, put the Governor episodes at the end of season 3, and started season 4 with the Terminus episodes.  But, that aside, the Governor episodes were, without doubt, the highlights of the season, and they showcased what the writers are capable of at their best.

Unfortunately, the show only goes downhill from that point.  Sure, the first few Terminus episodes move the plot forward a bit, but they’re pretty lacking in terms of character development.  Some of you may argue that they explore Michonne and Daryl’s back-stories, but almost nothing is said that we didn’t either already know or that we couldn’t have inferred on our own.  And even then, they dive into the back-stories for no real reason other than to inform the audience (and kill time).  Exploring a character’s back-story should never be done solely for that purpose.  Yes, you want to inform the audience, and, yes, the audience wants to understand what that character has gone through, but it should be done in a way that develops the character rather than dumping exposition on the viewer.  Let me give you an example: in The Walking Dead: The Video Game, the back-story of Lee, the main character, is revealed bit by bit as he recounts it to other members of his group and, while it also brings the audience up to speed, the main purpose for revealing his back-story in the game is for Lee to come clean by talking about his past and, ultimately, to gain the trust of the other group members.  Lee’s back-story serves to develop the relationship between him and the other characters rather than just telling the audience what happened to Lee before the outbreak.  In the show, though, the writers use the back-stories as a means of dumping as much extraneous information into the viewers’ heads as possible when that shouldn’t have been their goal at all.  They treated Michonne and Daryl’s back-stories as a means of informing the audience rather than progressing the story, and the season suffered because of it.

The latter half of the season wasn’t without its high points, however.  Episode 14 develops the relationship between Carol and Tyreese, bringing Carol’s story arc from the beginning of the season full circle while also bringing Lizzie’s condition to the forefront.  It truly was a great episode and one that made more sense for Carol.  While I thought it was out of character for her to have killed Karen and David, the fact that she had to kill Lizzie at the end of that episode felt more reasonable (just look at the flowers).  With Lizzie, she was forced into a situation in which she really had no choice but to act; with Karen and David, she chose to murder two people for being sick.  Carol’s decision process matured more realistically when she was forced to kill Lizzie than when she chose to kill Karen and David.  Rather than make an irrational choice to murder two people to stop the spread of a disease, she made a rational choice to put an end to Lizzie’s sickness, which would have only spread more death.  That episode stood out along with the Governor episodes as one of the best of the season and gave me hope that things would only improve.

But, then, we got to the season finale.  Honestly, it was one of the worst season finales I’ve ever seen (though it’s nowhere near as bad as Dexter’s finale).  Now, don’t get me wrong, I didn’t think the episode was bad; it just didn’t feel like a season finale.  I didn’t mind that the writers went with a cliffhanger ending, but they did so without any of the season’s conflicts being resolved.  It didn’t feel like the end of a chapter, but more like a chapter cut off at the midpoint.  Dexter’s sixth season had a cliffhanger ending, but the main source of conflict for the season (the Doomsday Killer) was dealt with by the end, bringing that chapter of Dexter’s story to a close.  Similarly, in Code Geass, the first season ends with a cliffhanger, but the battle at Lelouch’s school had ended, again, bringing that chapter of the story to a close.  In The Walking Dead, we got a cliffhanger, but the group was still split up, Beth was still missing, and we still don’t know with certainty what was going on in Terminus (though I can guess).  The finale felt more like a penultimate episode than an actual finale, and that’s what keeps it from being a good finale despite being a decent episode.

In the end, The Walking Dead’s fourth season has a few good episodes, but it’s extremely lacking in terms of tension and character development, with the exception of the aforementioned episodes.  It seems that for everything this season did right, it did another two things wrong.  This isn’t the worst season of The Walking Dead (that “honor” still belongs to season 2), but it is the most disappointing.  Season 4 had such great potential early on, but squandered it on exposition dumps in the form of Michonne and Daryl’s pointless back-stories and excuses to kill off characters like Dr. Caleb.  (Remember him?  So do I…that’s why I described him so vividly earlier in the review.)  At this point, I’m seriously worried that the show may never recapture the tragedy and macabre revelations that made the first season so great.  Only time will tell, I suppose, but the writers desperately need to refocus on the suspenseful character-driven narrative that made The Walking Dead stand out in a horde of other zombie apocalypse works.

Memo #4

I’m writing this memo because I want to apologize for not posting anything new for such a long time.  I’ve had some unexpected delays for various reasons, but rest assured that I am still working on new posts and, hopefully, I should be able to get one up within the next few days.  I have no plans of shutting down this blog, but I have some things to work around before I can post things regularly (or at least as regularly as I used to) again.  That being said, I will try my best to get new posts out as soon as I can write them.  Thank you all for your patience, and again I apologize for the delay.

Jake

Sherlock Season 3 Review

Spoiler Alert: While I will avoid spoiling events of this season, I am going to assume that if you’re reading this, you are familiar with the events of the past two seasons.  If you haven’t seen the first two seasons of Sherlock, then the short version of this review is as follows: go watch Sherlock right now.  Seriously, stop what you’re doing and go watch it.  Stop reading this review, close out of this window, and go watch it.  Are you back yet?  OK, then let’s get on with the review.

For those of you reading this in Great Britain, yes, I know season 3 wrapped in the UK about a week ago, but I wasn’t able to get this review out then, so I figured I should at least talk about the season before it airs in America…and what a season it was.  It’s not often that a show can maintain the same level of quality throughout its lifespan (you need look no further than Dexter to see that), but Sherlock is one of those shows that manages to stay just as intriguing and just as exciting in every season.  Although it isn’t flawless, Sherlock is without doubt one of the best shows on TV right now, and for good reason.  The writing is fantastic, the acting is phenomenal (in particular Benedict Cumberbatch’s portrayal of Sherlock Holmes), and the source material is expertly adapted to suit modern times.  Season 3 carries on that tradition for the most part, but there are some things that I feel keep it from being as good as the last two seasons.

My greatest concern going into this season of Sherlock was how they were going to explain how Sherlock survived his fall at the end of the last season.  Trying to explain how someone survived their apparent death can be extremely difficult and, if you don’t do it correctly, it can come off as just being a ridiculous excuse to bring back some character.  Fortunately, the explanation behind Sherlock’s “demise” made sense and, even if it wouldn’t have worked in real life, it was believable enough for me to buy it.  Granted, it takes quite a while for them to actually explain everything, but regardless I could tell that the writers put a great deal of thought into explaining things when they were putting the script together.  The same could be said for the rest of the script this season; the writing is just as good as it has been in the last two.  That being said, it does take the focus away from the crime-solving aspect of the show in favor of further developing the relationship between Sherlock and Watson.  Although there are still crimes to be solved this season, most of the first two episodes are dedicated to the main characters.  While I don’t feel that it was poorly executed (I think just the opposite–it was nice to see a more human side of Sherlock this time around), it took away from some of the intrigue that the last two seasons had in spades.

However, the final episode introduces the main villain of the season, Charles Augustus Magnussen, a man who deals in blackmail and serves as an excellent replacement for Moriarty, with a presence that just might rival that of Sherlock’s old foe.  Episode 3 is when the plot really starts to get interesting (not that the first two weren’t interesting) as Sherlock must, once again, use all of his skills to take down one of the greatest threats he has ever faced.  Indeed, the episode managed to deliver some truly exciting moments as well as some plot twists that I don’t think you’ll see coming (what else would you expect from Steven Moffat?).  This episode kept me on the edge of my seat the entire time and left me wondering how they were going to carry things over into the next season.  But, as it turned out, the season had an ending that I thought could have been better.  Don’t get me wrong, it got me excited to see season 4, but it really made season 3 feel less like the beginning of a new story arc and more like a mere setup for next season.

Sherlock season 3 is a welcome return to Baker Street with the same great writing and acting of the previous two seasons.  It had plenty of memorable moments, did a much better job with the middle episode than the show has in the past, and had an ending that is sure to keep people wanting to see what happens next.  But, at the same time, the choice to put a greater focus on Sherlock and Watson’s relationship early on took away from some of the intrigue that the show is known for (though the final episode partly made up for this) and, as such, the season took longer to establish the main conflict.  In the end, I suppose the best way to describe season 3 is a natural progression without escalation.  While the character development and story feel like a natural continuation of what had been done the previous season, there was no sense of escalation in terms of the main villain.  Sure, Magnussen was an imposing figure, and sure, he posed a great threat, but because he wasn’t introduced until the final episode, he wasn’t able to establish himself enough to feel like a greater threat than Moriarty.  Despite my few complaints, though, I enjoyed season 3 immensely.  The advancements made with the characters’ relationships and the short time with Magnussen were well worth any problems this season had.  If you haven’t already, I highly recommend you watch season 3 of Sherlock; it’s an experience you won’t want to miss.

Why Raising the Minimum Wage is a Bad Idea

After a great deal of controversy surrounding the implementation of the Affordable Care Act, debate in Washington has now shifted to another issue (like it has every other time the tide has turned against the Obama Administration): “income inequality”.  As has happened during other times of economic downturn, many in Congress (mostly Democrats) as well as President Obama are now pushing for, among other things, a raise in the minimum wage rate (currently $7.25 an hour).  They claim that doing so will help give a “fair” wage to those currently struggling to get by and take care of themselves and their families.  On the surface this may seem like common sense.  “Of course we need to make sure that workers make a fair wage and can take care of their families,” some of you may say.  Unfortunately, raising the minimum wage won’t help workers; it will only hurt them.  One fallacy that economists warn against is the idea that good intentions will automatically bring about good results.  Just because something sounds like a good idea doesn’t mean that it is.  This line of thought still rings true today because, while those in favor of raising the minimum wage may have their hearts in the right place, all they’re really doing is falling into this exact fallacy.

According to the Law of Demand, a lower price on a product will make people more willing to buy said product.  Conversely, the Law of Supply states that a higher price will make people more willing to produce that product.  Put simply, producers want to sell for the highest price possible and consumers want to buy for the lowest price possible.  These same principles can be applied to the labor market.  Companies (on the demand side) want to hire workers for the lowest possible wage and workers (on the supply side) want to be hired for the highest possible wage.  In a completely free market, wages will naturally come to an equilibrium point: the highest wage companies are willing to pay and the lowest wage workers are willing to take.  A lower wage would cause workers to refuse to work, and a higher wage would cause companies to refuse to hire people.  As a result, the market has to naturally come to a point that makes both parties as happy as possible or else the system would fall apart.  In a free market, this will always work; however, problems start to arise when the government institutes a price floor (minimum wage).
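To put some illustrative numbers on this equilibrium (the curves below are invented purely for the sake of example, not drawn from any real labor-market data), suppose the number of workers firms will hire and the number of people willing to work at an hourly wage w are:

\[
D(w) = 100 - 5w, \qquad S(w) = 20 + 5w
\]
\[
D(w^*) = S(w^*) \;\Rightarrow\; 100 - 5w^* = 20 + 5w^* \;\Rightarrow\; w^* = \$8/\text{hour}, \quad L^* = 60 \text{ workers}
\]

At $8 an hour, all 60 people willing to work at that wage get hired and no position goes unfilled; there is neither a shortage nor a surplus of labor.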

The problem with setting a minimum wage is that, no matter what the rate is set to, it fails to achieve the advertised goal.  If the minimum wage is set below the aforementioned equilibrium point, then it will have no effect on what companies pay their workers because they were already being paid a higher wage to begin with.  However, if the minimum wage is set above the equilibrium point, it creates a gap between the number of workers a company would be willing to hire at that particular wage and the number of people willing to work for that wage.  As stated above, the higher wage causes an increase in the supply of labor because more people are willing to work for that wage.  But, at the same time, the demand for labor decreases because of the increased cost and, thus, companies have no incentive to hire new workers because the cost of adding an additional worker is higher than they are willing to pay.
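Continuing the made-up numbers from the sketch above, a floor set above the $8 equilibrium makes that gap easy to see:

\[
w_{\min} = \$10: \quad D(10) = 100 - 50 = 50, \qquad S(10) = 20 + 50 = 70
\]
\[
\text{labor surplus} = S(10) - D(10) = 70 - 50 = 20 \text{ workers}
\]

Seventy people want jobs at $10 an hour, but firms are only willing to hire fifty, leaving twenty would-be workers without jobs; a floor set at, say, $6 would sit below equilibrium and change nothing at all.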

The other problem with raising the minimum wage (or having one in the first place) is that it causes workers to be laid off.  Before the minimum wage increases, companies are used to paying their workers a particular wage corresponding to the value the company places on the workers’ ability to produce (wage is at equilibrium).  When the minimum wage is increased, the price of labor is artificially increased, meaning that the company is now paying more money for the same labor while taking in the same revenue.  As a result, the company has to find a way to compensate for this additional cost.  One option would be to raise the price on their products, but that would cause fewer people to buy their product, and those that would buy it would buy it less often.  So, the only viable solution would be to fire workers so that the company pays the increased wage to fewer people.  This means that an increase in the minimum wage will bring about higher unemployment figures and less productive companies.  Granted, that only applies if the wage is raised above the equilibrium point; otherwise there won’t be any effect, which means there’s still no point in raising the minimum wage.

Some of you may ask, “Then, how do we help the workers who are barely making enough to get by, or are we supposed to just abandon them?”  To that I would respond: let the free market take its course.  That may not seem like an immediate solution, but I assure you that if the government would stop interfering with the private sector, things would go much more smoothly.  When companies are free to produce with minimal government intervention, they can produce more of their product, which lets them sell more and thus make more money to hire more workers and/or raise their current workers’ wages naturally.  You may say that my logic is flawed because companies would have no incentive to raise wages on their own, but as companies produce more (and make more money), labor becomes more valuable and, as such, workers will want their wages adjusted accordingly.  If companies refused to raise wages, they would run the risk of having their workers quit.  In other words, greater output and profits by companies will naturally raise the equilibrium point and allow the free market to do its job to improve the economy and wages.

Raising the minimum wage won’t help workers the way many people claim it will.  Either it will prove to be an ineffective waste of time because it is set below market equilibrium, or it will force companies to lay off workers because the price of labor is artificially raised above equilibrium to a point where employers are unwilling to keep the same number of employees.  Raising the minimum wage may seem like the right thing to do.  But, in reality, it is a harmful approach to economic policy that has been tried before by the likes of Franklin Roosevelt and Jimmy Carter and failed in both cases.  Problems with “income inequality” don’t come from companies being greedy or an inherent unfairness in the system.  They come from the government making arbitrary wage and salary decisions and redistributing wealth the way it sees fit rather than letting the free market address these concerns.  So, if Congress really wants to help workers make a decent wage, it should back off and let free enterprise do the job.

Frozen Review

Every once in a while, I see a movie trailer and don’t really think much of it, but, when I actually see the movie, I’m blown away by how good it is (as was the case with Super 8).  After I walk out of the theater, I realize that the trailers and previews that I saw didn’t even come close to doing the movie justice.  That’s what happened to me with Frozen; I went into the theater not sure what to expect and left with the same kind of feeling I had after watching such movies as Beauty and the Beast, Tangled, and The Lion King.  To borrow the words of Dean Hardscrabble from Monsters University: “Frozen did something that no movie has done this year: it surprised me.”

Disney has established itself (particularly with its more recent animated movies) as being able to excel when it comes to characters, and Frozen is no exception.  Anna and Elsa (the two female leads) are what I consider to be two of the most human protagonists that Disney has introduced.  What I mean by that is that neither of them is flawless (or even nearly so) like some Disney characters of the past (I’m looking at you, Aurora); they have flaws and problems that they have to deal with, just like we do in real life.  And the way that they each handle their respective problems seems like the way a real person would act.  Both Anna and Elsa feel like they could exist in real life (minus Elsa’s snow powers, of course), like you could meet them in your everyday life.  They may be princesses by title, but that by no means makes them unrelatable.  On top of that, the three major supporting characters (Kristoff, Olaf, and Sven) do much to add to the movie.  Olaf in particular offers plenty of comic relief to balance out the more emotional moments in Frozen.  However, he does so without compromising the seriousness of any situation; when given the choice between humor and emotion, Frozen always goes with emotion.

One notable change from past Disney movies that Frozen makes is the fact that there really isn’t an ever-present villain (I know some may argue that there are two villains, but they were never as important to the story as, say, Jafar or Maleficent).  Instead, the story focuses on Anna’s quest to save her sister, which lends itself to telling a more relatable story.  Not only that, but it shows that Disney doesn’t have to rely on a central villain to tell a compelling story.  Fans of some of Disney’s past villains hoping for a new one may be disappointed, but I think that if there had been a villain like Jafar in Frozen, it would have ruined the experience.  The movie was meant to be about Anna and Elsa overcoming their problems and growing as individuals and as sisters.  Adding a villain would have just over-complicated things.  Since Frozen is a Disney movie, I have to talk about the musical numbers, and, let me tell you, they don’t disappoint.  Frozen has some of the best songs of any recent Disney movie, with the standouts being Let It Go and In Summer, and I would even put the soundtrack up there with classics like Beauty and the Beast and The Lion King.

Disney has hit the jackpot yet again with Frozen.  They found a way of taking the “Disney princess” movie and translating it to modern times with Tangled, and Frozen is a continuation of that success.  If this new round of animated movies is a sign of things to come, then the future sure looks bright for Disney Animation.  Frozen is without a doubt the best animated movie of 2013 and will certainly be a contender for my favorite movie of the year.  It doesn’t matter if you’re young or old, everyone should be able to enjoy Frozen, especially if you’re as big a Disney fan as I am.  Having grown up watching classic Disney movies, it gives me great pleasure to be able to wholeheartedly recommend Frozen and to say without hesitation that Disney just might have another classic on their hands.

The Importance of the Filibuster

Yesterday, Senate Majority Leader Harry Reid called for a vote to change the Senate’s rules to allow cloture to be invoked on minority filibusters of presidential appointments with a majority vote rather than the three-fifths (60 out of 100) vote that was previously required.  While the issue of filibuster reform isn’t new to the Senate (Reid threatened to change the rules earlier this year), this is the first time that such a rule change has actually occurred.  Senator Reid, along with numerous Democrats, has argued that this rule change is necessary due to the drastic increase in the number of filibusters used over the past few years, in particular during President Obama’s time in the White House.  Republicans, on the other hand, argue that, although this change only applies to presidential appointments and not legislation or Supreme Court nominations, the rule change would haunt the chamber for years to come.  Senate Minority Leader Mitch McConnell even went so far as to call the rule change “nothing more than a power grab”.  It’s a controversial topic and one that, in my opinion, does merit some discussion.  However, this rule change is not the way to go about filibuster reform.  It may make the Senate function more quickly, but at the cost of the rights of the minority.

As a quick refresher, a filibuster is a tool used by the minority party to delay debate on or passage of a bill.  In a filibuster, a senator may speak on whatever topic he wants to and may speak for as long as he wishes so long as he speaks continuously and does not give up the floor to another senator.  The filibuster became part of the Senate’s legislative process early in the chamber’s history, serving to give a greater voice to the minority party.  As I noted previously, the Founders wanted the legislative process to be slow and deliberate rather than fast and rash.  They even mentioned the fear of a tyrannical majority overriding the minority several times in The Federalist Papers.  This made the filibuster that much more useful.  Not only would it allow the minority to voice its opinion on legislation with which it greatly disagreed, but it would keep laws from passing too quickly and potentially causing damage to the economy or, say, a government organization.  It would also prevent the president and his party from stacking the federal court system and/or the Supreme Court with judges that would simply rubber-stamp their agenda.  This rule change, however, threatens to eliminate this vital part of the legislative process altogether.

If all that is required to invoke cloture, forcibly ending a filibuster through a vote, is 51 votes, then the majority party can essentially stop any filibuster by the minority that it wants to, which would most likely be all of them.  As such, the majority would be able to pass any bill or approve any presidential appointment or Supreme Court nomination without the minority being able to do anything about it.  This gives the majority an unprecedented amount of power that, if misused, may turn tyrannical, something the Founding Fathers wanted to avoid.  The reason that the three-fifths requirement was included was to make it more difficult for the majority to silence the minority and thus let the minority voice its opinion more freely.  Now, I recognize that the rule change that occurred yesterday only affects the power to filibuster presidential appointments.  However, the same principle applies; the Democrats now have the ability to approve all of President Obama’s appointments without Republicans getting the chance to bring about a reasonable debate as to an appointee’s qualifications.

That aside, though, Harry Reid’s push to restrict the minority’s ability to filibuster introduces other problems that may result in a more tyrannical rule of the majority over the minority.  The first problem is that by changing the rules regarding presidential appointments, Reid has opened the door for the same kind of rule change for legislation or Supreme Court nominees.  He has proven with this change that he has the votes to make it happen.  What would stop him from calling such a vote?  Some may argue that Senator Reid only intended this to be a temporary measure to reduce the number of Republican filibusters, but that brings me to the second problem with this rule change.  It is now highly unlikely that the rules will be changed back to their original form.  Again, you may say that this is only a temporary measure, but let me ask you this: what incentive would any majority party have to change the rules back?  The answer is that they would have none.  If the majority party (regardless of which party it is) can pass a bill or approve a nomination without the minority party being able to delay or prevent the bill or nomination from coming to a vote, why would they change the rules to allow the minority to do just that?  Exactly, they wouldn’t.

I’m not saying that the filibuster has absolutely no drawbacks; both parties have shown that, given the chance, they will abuse their power to filibuster simply to spite their opponent.  However, the costs to the democratic process are far too great to essentially eliminate the minority’s ability to filibuster altogether.  Putting these kinds of restrictions on the filibuster will only serve to silence the minority in the long run.  There are other ways to go about filibuster reform.  The Senate could change the rules to require speakers to talk only about the bill or nominee being debated at the time.  The Senate could even put a time limit on filibusters so that the minority could still voice its opinion without completely preventing the Senate from doing its job.  Stripping the minority of its ability to filibuster will only lead to the majority exerting its will upon the Senate with no regard for the rights of the minority.  As Thomas Jefferson once said, “All, too, will bear in mind this sacred principle, that though the will of the majority is in all cases to prevail, that will to be rightful must be reasonable; that the minority possess their equal rights, which equal law must protect, and to violate would be oppression.”

The Legend of Korra Book Two: Spirits Review

SPOILER ALERT!  I just can’t thoroughly review Book Two of Korra without talking about spoilers, so I’m warning you now that you shouldn’t read this review unless you’ve seen all of Book Two.  You have been warned.

Out of all the movies and TV shows coming out this year, Book Two of The Legend of Korra was among the ones I was most looking forward to.  Not only was it the continuation of the follow-up to one of my favorite shows ever, Avatar: The Last Airbender, but it was going to explore an area left mostly untouched both in Avatar and Book One of Korra: spirits and the spirit world.  This was an opportunity to take the franchise to new heights and add to the already rich lore of the Avatar universe.  Book Two had just about everything going for it, and that was what had me so excited.  Unfortunately, Book Two fails to live up to its potential in almost every regard, leaving me incredibly disappointed with the final product.  While The Legend of Korra Book Two: Spirits has a few great moments scattered throughout, it is a largely unsatisfying chapter in Avatar Korra’s story.

From the beginning, it’s clear that Book Two doesn’t quite live up to the standards of Book One.  The dialogue doesn’t feel as well-written, the dark spirits look a bit generic (much like the ghosts in Luigi’s Mansion), and Korra magically learns how to calm dark spirits down despite the fact that no one taught her how to do it.  However, the greatest letdown of Book Two is the character development or, rather, the lack thereof.  Most of the characters that I grew to love (Lin, Tenzin, Asami, etc.) were practically reduced to background characters, contributing almost nothing to the plot, and even Tenzin’s arc with Kya and Bumi (who was a huge disappointment, by the way) felt like it was only added to pad out the story.  Even Bolin, who had such great potential to shine in Book Two even though he wasn’t one of my favorite characters in Book One, did nothing of importance until episode 12.

The preexisting relationships between the characters are handled poorly as well, most notably Mako and Korra’s.  Within a few episodes, they break up after getting into some forced argument about Mako not supporting Korra enough, and I wouldn’t have had a huge problem with it if they hadn’t basically declared their undying love for each other at the end of Book One, or if there had been some decent build-up to their breakup.  I understand that all couples argue, but going from “I love you” to “I don’t want to be with you anymore” is a bit ridiculous (what is this, Twilight?).  This, along with another unsuccessful romance between Mako and Asami (though I liked their relationship more this time around) and Bolin having two teased romances that came out of nowhere and went nowhere, left a sour taste in my mouth as far as the old characters were concerned.

However, the writers did introduce a series of new characters, including Unalaq, Korra’s uncle, who has a vast knowledge of how to deal with spirits and who later becomes one of the main villains of Book Two.  While he does bring up an interesting dilemma, mankind’s treatment of the spirits and its disregard for old traditions, he ends up becoming a pretty generic villain.  Unlike previous villains in both Avatar and Korra, Unalaq doesn’t get much in terms of background.  Sure, you find out a bit about his past, but you never really get to see his motives.  With Ozai and Azula, they were born into an evil environment and both wanted to rule over those they felt were weaker through fear.  With Zuko, he was desperate to prove himself and regain his honor.  With Amon, he wanted to take away what he thought was the source of all evil in the world.  With Unalaq, he wanted to destroy the world because he’s evil.  Because we don’t get a good look at his motives, he loses the presence that made his predecessors so menacing and intriguing.  As for the other new characters, I felt that they were underutilized, except for Eska, who mainly brought some comic relief into the first few episodes.

Despite all these flaws, though, there were some things that I liked about Book Two.  I found the two episodes featuring Wan and the origin of the first Avatar fascinating even though they contradict established lore on several occasions, such as when they explain the origins of bending and the nature of the Avatar State.  The war between the two Water Tribes, as short as it was, was a great source of tension, something lacking from the main story.  I also liked Mako’s subplot as an officer in Republic City trying to find the culprit behind a series of terror attacks that appear to be the doing of the Northern Water Tribe.  And I loved that Jinora finally got to contribute something to the plot.  But, these subplots also have their own flaws.

As I mentioned before, the Wan episodes contradict preexisting lore at times, and I didn’t quite buy Wan’s transformation in the first episode.  In addition, it’s never explained how Jinora knows so much about the spirits, how she knows how to solve some of the problems Korra and the others face, or why she can enter the spirit world when Tenzin can’t.  The civil war, which looked to be one of the more exciting plot points, is sidelined for most of Book Two while these other subplots happen.  My biggest problem with them, though, is that they ultimately overshadow the main plot.  I was actually more worried about how Mako was going to avoid going to jail for the rest of his life than I was about the world coming to an end, and when your subplots are more interesting than your main plot, that’s not a good sign.  That being said, the final two episodes were amazing (even if they relied on a deus ex machina to resolve everything) and they definitely ended Book Two on a high note.  But, two good episodes, even if they are the finale, don’t make up for all the missteps of the previous twelve.

In the end, Book Two of The Legend of Korra feels rushed, which is ironic considering how long it took to be released.  It may seem like I’m going out of my way to be hard on Book Two, but I hate the fact that I have such a negative opinion of it.  I’ve been a fan of the Avatar universe from the very beginning.  I waited anxiously for The Boy in the Iceberg to air from the moment I first heard about it, and I kept watching through all three seasons of Avatar and Book One of Korra.  These two series are very dear to me, and I never pass up an opportunity to rewatch old episodes and relive all the great moments they have to offer.  But, it’s for those reasons that I’m so disappointed with how Book Two ended up.  It had so much potential, but it was squandered on poor character development, plot points that went nowhere, and subplots that wound up being more interesting than the main story.  Book Two may be worth watching if you’re a hardcore Avatar fan (if for no other reason than to see where the plot goes), but casual viewers may find it a turnoff to the series, which is such a shame since I know the writers are capable of so much more.

The Myth of the New Deal

This is a research paper that I just finished writing for my composition class that I thought was relevant enough to some of today’s political debates for me to post here.  Hopefully, you will find it insightful, though I will warn you that it’s quite a bit longer than anything else I’ve posted before.

Following the economic boom of 1921-1929, the period known as “The Roaring Twenties”, the United States fell into a state of stagnation precipitated by a stock market crash in October of 1929 and resulting in an economic collapse that lasted for more than a decade.  The period following this collapse would later be dubbed the Great Depression and be considered the worst economic crisis in the history of the United States.  In the decades following the depression, economists, historians, and politicians have continued to debate both the underlying causes of the Great Depression and the factors that contributed to the economic recovery of the 1940s that marked the depression’s end.  Although those topics have become less prevalent in most political discussions of the modern day, the implications of these debates remain relevant as they may provide insight into potential solutions to America’s current economic downturn.  As such, this paper will seek to answer the question at the heart of the aforementioned debates: to what extent did government policy cause the Great Depression and lead to the economic recovery of the late 1940s?

This paper will examine the economic policy of “The Roaring Twenties”, any change in policy that may have occurred at the beginning of the Hoover Administration, and the effects of those changes, if any, on the United States economy.  In addition, this paper will examine the economic policies instituted during Franklin Roosevelt’s presidency, often referred to collectively as the New Deal, their similarities to or differences from previous economic policy, and their effectiveness in restoring the American economy in the long run.  As a result, this paper intends to show that the federal government directly contributed to the onset of the Great Depression by adopting more burdensome policies for businesses both before and after the market crash in 1929, in contrast with the minimalistic, business-friendly policies of the previous decade.  Furthermore, this paper will show that the New Deal policies of the Roosevelt Administration only served to hinder America’s economic growth and that recovery did not begin until President Roosevelt was forced to unleash American industry shortly before the United States became involved in World War II.

After taking office in 1921, President Harding employed the same pro-business, free market economic policies that had allowed America to prosper in the past.  Harding believed that, with regards to the economy, “We need vastly more freedom than we do regulation” (Murray 171).  Harding also proclaimed in a message to Congress in April of 1921 that “I have said to the people we meant to have less of Government in business as well as more business in Government” (Murray 172).  This statement would summarize his administration’s entire economic policy.  One of the first actions Harding took as president was to have his Treasury Secretary, Andrew Mellon, conduct a study to examine the effects of increased taxes on tax revenues.  The study indicated that higher taxes “…put a pressure on the tax payer to withdraw his capital from productive business and invest in tax exempt securities…” (Folsom 128).

After the study had concluded, Harding went about lowering the tax rates for all citizens, including the wealthiest Americans.  Harding pushed for an atmosphere of greater cooperation between government and business rather than the adversarial approach that would be taken later that same decade.  As a result of Harding’s economic policies, tax revenues more than doubled, rising from “roughly $300 million to $700 million” (De Rugy para. 5) within the first year of his presidency.  At the same time, Harding’s tax cuts allowed the American economy to grow throughout the 1920s as the nation’s Gross National Product (GNP) grew at a rate of 4.7 percent and unemployment fell from 6.7 to 3.2 percent (De Rugy para. 6).  This growth was due to the increased incentives to work, save, hire, and invest created by allowing people, both rich and poor, to keep more of their own money to use as they wished.

In addition to increasing government revenue through lowered tax rates, Harding managed to limit government spending in order to keep the national debt and deficit under control.  One instance of this sort of fiscal responsibility was when he vetoed a bill that had come to his desk for the purpose of giving more benefits to World War I veterans.  In response to the criticism he received, Harding stated that although the country owed its veterans more than it could ever pay, the United States simply could not afford to give more benefits to anyone.  Public opinion was not in Harding’s favor in this matter, but his actions helped reduce America’s debt and keep federal spending under control.

When Harding died of a heart attack in 1923, his vice president, Calvin Coolidge, succeeded him and continued many of his established policies.  After Coolidge was inaugurated, he quickly became famous for his so-called “active inactivity” on the economy.  “In his Inaugural he asserted that the country had achieved ‘a state of contentment seldom before seen,’ and pledged himself to maintain the status quo” (whitehouse.gov para. 6).  Like his predecessor, Coolidge strongly opposed federal interference meant to keep the economic boom of the 20s in check.  He continued to call for tax cuts and little federal regulation or other means of interference so as not to stunt America’s economic prosperity.  As a result, by 1924, he was hailed as having brought the country into what was called “Coolidge prosperity”.  This prosperity would last until the stock market crashed in 1929, after Coolidge had left office and Herbert Hoover had taken over.  While Hoover has received much of the blame for the onset of the Great Depression, it has been for taking a “minimalistic” approach to government intervention.  However, historical records seem to indicate otherwise.  For the first time in American history, the government stepped in as part of an attempt to solve the country’s economic woes and, as Gene Smiley points out, “What failed in the 1930s were governments, in their eagerness to direct economic activity to achieve political ends—ends that were often contradictory” (Smiley para. 10).

Early in the year 1929, many European countries had stopped paying off their loans to the United States, leaving America with a large debt left to repay despite what Presidents Harding and Coolidge had done to reduce it.  Compounding this damage was the passage of the Smoot-Hawley Tariff Act, which placed tariffs on thousands of imported goods and increased tariffs on hundreds more.  President Hoover received numerous letters imploring him to veto the legislation, but he signed it into law regardless, resulting in foreign countries placing higher tariffs on American goods being exported to those countries.  In addition, Hoover attempted to avert economic crisis by instituting policies contrary to the ideas employed by his predecessors.  He raised taxes and urged companies to keep wages artificially high, although he did not pass any law establishing a minimum wage.  Instead, Hoover induced higher prices to make firms more willing to produce.  While Hoover intended to maintain spending across the consumer spectrum, most consumers were still feeling the effects of the stock market crash and were unwilling or unable to pay inflated prices for goods.  As a result, foreign trade in the US stagnated, as did American industry.  To compound the president’s mistakes, the Federal Reserve responded to the market crash by “…cutting the money supply by nearly a third, thus choking off hopes of a recovery. Consequently, many banks suffering liquidity problems simply went under…” (Investopedia para. 4), resulting in the rampant bank closures of the early years of the depression.

Herbert Hoover’s presidency marked a change in economic policy from the business-friendly policies of the Harding and Coolidge Administrations to a more interventionist approach, especially following the stock market crash in 1929.  This change in policy would continue through the duration of Hoover’s presidency and, ultimately, do little to alleviate the economic downturn America was experiencing.  In the presidential election of 1932, Franklin Delano Roosevelt promised to restore America’s former prosperity and to put an end to the period of unchecked private sector growth that he blamed for the depression.  He claimed that the depression was brought about “primarily…because rulers of the exchange of mankind’s goods have failed through their own stubbornness and their own incompetence” and that by “…direct recruiting by the Government itself…” and “…engaging on a national scale in a redistribution…” (Roosevelt para. 4 and 9), America’s economy could be restored.

However, despite Roosevelt’s claims to the contrary, his philosophy on the role of government in the economy was immensely similar to that of Herbert Hoover.  President Roosevelt began a series of government programs, later called the New Deal, modeled after the economic philosophy of John Maynard Keynes, an economist who believed that the government had to take an active role in reversing economic downturns.  These programs were designed to reduce unemployment, initiate several public works projects, and rein in the private sector, particularly the banking industry, both to avoid another stock market crash and to ensure that big capitalists did not take advantage of consumers or workers.  One of the first measures taken by Roosevelt was the “bank holiday,” in which every bank in the nation was closed down for one week so it could be “screened” by the federal government.

At the end of the one-week period, only the banks that the government had determined were stable enough to continue functioning would be allowed to reopen.  The purpose of the holiday was to ensure that the people’s money would be left in the most stable banks.  In practice, however, the bank holiday only hurt the industry as a whole.  “Banks needed permission from the secretary of the Treasury to do anything.  Businesses were undoubtedly reluctant to accept checks because banks couldn’t clear checks” (Powell 54).  In contrast, during the Panic of 1907, financier J. P. Morgan took control of the bank rescue and allowed banks to continue clearing checks even while closed, resulting in a much more rapid and efficient recovery.  As Murray Newton Rothbard asserts, “The laissez-faire method would have permitted the banks to close…” and “…be transferred to the ownership of their depositors.  There would have been a vast, but rapid, deflation, with the money supply falling to virtually 100 percent of the nation’s gold stock” (Rothbard 329).

Another measure taken by Roosevelt during the Great Depression was to increase the revenue available to the government for public works projects through higher taxes.  Roosevelt chose to keep the excise taxes established under Hoover, introduce new taxes such as the Social Security tax, and raise taxes on the wealthiest Americans in an attempt to “…equalize wealth, which Roosevelt thought was especially important during such a time of economic hardship” (Folsom 131).  Unfortunately, these additional taxes did little to improve the economy.  In fact, they had the opposite effect, putting a greater burden on lower earners as well as corporations while reducing the amount of revenue that the government took in.

The burden of the Social Security tax, a small payroll tax meant to fund the Social Security System and provide a safety net for those unable to work or too old to do so, fell hardest on low-income families: because only the first $3,000 of income was subject to the tax, a low earner paid it on every dollar he made, while a wealthy earner paid it on only a small fraction of his income.  Tax rates on the rich were increased as well, with the highest marginal rate raised “…to 79 percent, the highest in US history” (Folsom 128).  The increased rate added to the financial burden carried by the top earners in America, and, just as Andrew Mellon predicted in his aforementioned study, it drove more capital out of the economy and thus brought in little revenue for the federal government.  Ultimately, Roosevelt’s tax policy did little to balance a budget swollen by growing government expenditures, mostly on public works projects meant to provide relief for those left unemployed by the onset of the Great Depression.

Perhaps the most notable programs enacted under the New Deal were the previously mentioned public works projects and the government agencies set up to rebuild the economy.  One of these agencies was the Agricultural Adjustment Administration (AAA), established to ensure that the farming industry remained solvent throughout the depression.  To do so, the AAA offered subsidies to farmers who were willing to destroy a percentage of their product and, in some cases, not grow their crops in the first place.  This policy was meant to keep crop prices high in order to keep farmers in business and to maintain high wages.  In practice, however, these artificially high prices strained a nation that desperately needed inexpensive food.  Most middle-class citizens could not afford them, and food shortages continued into the late 1930s.  The high wages also “…led to further job loss, particularly in manufacturing” (Cole para. 9).  Thus, government action to artificially induce higher prices and wages only added to unemployment “…because companies couldn’t afford to keep large payrolls at the rates set by the government” (Investopedia para. 10).

Public works projects proved to be failures as well; while many succeeded in creating jobs, the vast majority of these were temporary positions on construction projects that would end within a few years.  Many public works projects also failed to increase the production of valuable goods, as most of them focused on rebuilding America’s infrastructure and creating jobs regardless of where those jobs were actually needed.  As a result, much of the additional workforce was put into areas that did not greatly contribute to economic growth.  As Professor Joab Corey of Florida State University’s Department of Economics puts it, “If you pay half the unemployed people to dig holes and the other half to fill them up, everybody’s going to be employed, but nothing is going to be produced.”  President Roosevelt’s efforts to rebuild America’s economy through government intervention proved unsuccessful and, in some cases, counterproductive.

Only when it became necessary to build up America’s defenses against the threat posed by Germany, its European allies, and Japan did Roosevelt change course: he “…wanted lots of things made inexpensively, and pushed wages and prices below market levels” (Investopedia para. 11) in order to get businesses producing goods to be used in war.  As a result, businesses had more capital to put toward production and, thus, to fuel the economy.  With production rising and the federal government purchasing more and more war goods, businesses were able to hire more workers and raise wages naturally as the market improved and unemployment dropped.  Later, “when the war finished, the trade routes remained open and the post-war era went from recovery to a bull run in a few short years” (Investopedia para. 11), producing the economic prosperity America experienced from the late 1940s through the 1950s.

Ultimately, no failure of private business or free enterprise created the Great Depression; the market simply went through the natural process of boom and bust inherent in any free economy.  It was the government intervention meant to resolve the initial stock market crash of 1929 that actually produced the depression.  In spite of this reality, Franklin Roosevelt blamed his predecessor not for the actions Hoover took while in office, but for supposedly remaining inactive, a charge that ran counter to the truth.  By blaming Hoover’s supposed inaction, Roosevelt justified his initiation of a Keynesian economic recovery.  However, the policies enacted under Roosevelt’s New Deal were largely unsuccessful and often helped prolong the depression.  Higher taxes and wages put a greater burden on business and kept unemployment high throughout the 1930s.

Although Roosevelt was able to bring unemployment down slightly with public works projects focused on infrastructure, the jobs created were temporary and did not increase the production of valuable goods.  Only during World War II, when Roosevelt reversed several of the New Deal’s anti-business policies, did the economy begin to recover, thanks to a surge in production fueled by the additional capital that businesses were allowed to keep and spend on additional product.  After the war, this change in policy remained, allowing international trade to flourish once again.  In short, it was the government’s policy of intervening in the private sector, rather than allowing the economy to right itself through natural market forces, that was directly responsible for the onset of the Great Depression, and it was the subsequent New Deal programs that perpetuated the decline the depression brought on.

Works Cited

1. Cole, Harold L., and Lee E. Ohanian. “How Government Prolonged The Depression.” The Wall Street Journal 2 Feb. 2009: n. pag. Print.

2. Corey, Joab. Personal interview. 6 Nov. 2013.

3. De Rugy, Veronique. “1920s Income Tax Cuts Sparked Economic Growth and Raised Federal Revenues.” Cato Institute. N.p., 4 Mar. 2003. Web. 16 Nov. 2013.

4. Folsom, Burton W., Jr. New Deal or Raw Deal? New York: Threshold Editions, 2008. Print.

5. Murray, Robert K. The Harding Era. Minneapolis: University of Minnesota Press, 1969. Print.

6. Powell, Jim. FDR’s Folly: How Roosevelt and His New Deal Prolonged the Great Depression. New York: Crown Forum, 2003. Print.

7. Roosevelt, Franklin. “First Inaugural Address.” Washington, DC. 4 Mar. 1933. Speech.

8. Rothbard, Murray Newton. America’s Great Depression. Princeton, NJ: Van Nostrand, 1963. Print.

9. Smiley, Gene. Rethinking the Great Depression. Chicago: I.R. Dee, 2002. Print.

10. “Warren G. Harding.” The White House. N.p., n.d. Web. 26 Apr. 2012. <http://www.whitehouse.gov/about/presidents/warrenharding>.

11. “What Caused The Great Depression?” Investopedia. N.p., 26 Feb. 2009. Web. 31 Oct. 2013.