Teaching Popular Cultural Semiotics

Jack Solomon is professor of English at California State University, Northridge, where he teaches literature and critical theory. He is often interviewed by the California media for analysis of current events and trends. He is co-author, with Sonia Maasik, of Signs of Life in the U.S.A.: Readings on Popular Culture for Writers and of California Dreams and Realities: Readings for Critical Thinkers and Writers, and he is the author of The Signs of Our Time and Discourse and Reference in the Nuclear Age.

The Whirled Cup

posted: 7.17.14 by Jack Solomon

With the World Cup standing as the globe’s most prominent popular cultural event of the moment, I think it is appropriate for me to take a cultural semiotic look at it, especially in the wake of all the commentary that has followed Brazil’s rather epic loss to Germany in the semi-finals.  As I write this blog, Holland is playing Argentina in the second semi-final, but since neither the outcome of that game nor the final to follow is of any significance from a semiotic point of view, I will not concern myself here with the ultimate outcome of the games but will focus instead on the non-player reactions to the entire phenomenon.

Let me first observe that while I am myself not a fan of the game that the rest of the world calls football (I’m not a fan of the game that Americans call football either), I am fully aware that for much of that world the prestige of the World Cup roughly equals the value to us Americans of the World Series, the Super Bowl, the NCAA Final Four, the NBA finals, and the BCS championship combined. I have also been surprised to learn that the Olympic gold medal for football carries only a fraction of the significance of the World Cup for the rest of the world, as signified by Argentina’s attitude towards Lionel Messi (currently the world’s greatest scorer, and perhaps the greatest of all time), who brought home Olympic gold in 2008 but is still regarded as a lesser man than Diego Maradona, who, in spite of a controversial career that boasts no Olympic gold medals, did bring home the Cup in 1986.  (Perhaps lesser “man” is the wrong term: Argentines simply regard Maradona as “God.”)

So I get the point that football is a very big deal in the rest of the world, so big that it may not be possible for most Americans to grasp just how big a deal it is.

Which takes me to the semiotic question: why is football such a big deal?  What is going on when a reporter from the Brazilian newspaper O Tempo can remark, in the wake of the 1-7 defeat at the hands (or feet) of Germany: “It is the worst fail in Brazil’s history. No-one thought this possible. Not here. Not in Brazil.  People are already angry and embarrassed. In a moment like this, when so desperate, people can do anything because football means so much to people in Brazil”?

To answer this question I should perhaps begin by clearing the decks and noting that I don’t think that Ann Coulter has the answer.  I mean, American football, basketball, and baseball (our most passionately followed sports) are team sports too.  (Coulter appears to think that soccer-football is morally inferior because it is too team oriented and insufficiently individualistic, which is odd when one considers that names like Maradona, Pele, and Bobby Charlton—and let’s throw in Georgie Best for good measure—are at least as magical in Argentina, Brazil, and Great Britain as Babe Ruth, Joe Montana, and LeBron James are in America, and probably a lot more so.)

So how can it be explained?  As always, there is no single explanation: this question is highly overdetermined.  But let’s start with the sheer variety of sporting choices in America.  The list of easily available spectator and participant sports here is so long that there really isn’t much point in trying to list them.  America has them all, and so the appeal of any given sport must always be taken in the context of a lot of other sports competing for attention (which is why Los Angeles, the second largest metropolitan market in America, can get along perfectly well year after year without an NFL franchise).  In much of the rest of the world, on the other hand, while football isn’t precisely the only game in town, it is often practically so (let me except those African nations wherein long-distance running is practically the only game in town, which is why Africans—in men’s competitions, not women’s—win most of the important marathons).  A game that doesn’t require much in the way of expensive equipment, football can be played by all classes, and of course it offers a fantasy pathway to fame, glory, and riches for impoverished football dreamers.  In other words, for the rest of the world, football is the big basket into which nations put most of their sports eggs.

But who cares anyway?  Whether someone is carrying a ball over a line, kicking a ball into a net, throwing a ball into a basket, or hitting a ball onto the grass or into the bleachers (and so on and so forth), what difference does it make?  Why is Brazil in despair?  Why do people die at soccer-football games?  What gives with British soccer hooligans?

Here things get complicated.  Perhaps the most important point to raise is that sporting events have served as sublimated alternatives to war since ancient times.  The original Olympics, for example, featured events that were explicitly battle oriented—today’s javelin event at the modern Olympics recalls the days of spear throwing and a foot race run while carrying a shield—and the role of international sport in modern times continues to be that of a symbolic substitute for more lethal conflict (consider the passionate competitions between the USA and the USSR during the Cold War, with the 1972 Olympic basketball final and the 1980 hockey “miracle on ice” looming especially large in memory).  While I could go on much further here, suffice it to say that the significance of the World Cup is intimately tied up with nationalism and international conflict.  So when the Brazilian “side” fails to kick as many balls into a net as the German side, the emotional feel is akin to having lost a war.  This is not rational, but human beings are not invariably rational animals.  Signs and symbols can be quite as important as substantial things.

Americans right now are trying to get into the game when it comes to the passions of global football, but in spite of decades of youth football competition and legions of soccer moms, it really hasn’t happened yet.  All in all, American sport is still rather isolationist (I do not say this as a criticism): though we call the World Series, well, the World Series, only North American teams play in it, and the Super Bowl is only super on our shores.  But while there may be something parochial about our sporting attitude, at least it isn’t a matter for a national crisis if “our” team loses.  That’s not a bad thing.

Personally (and not semiotically), I believe that people should only get passionate about their own exercise programs (I feel awful if I miss a day of running), but, consistent with the mores of a consumer society, sport in America is increasingly a spectator affair, something to watch others do for us as a form of entertainment.  It isn’t good for the national waistline, but at least we aren’t in a state of existential angst because a handful of guys with tricky feet just lost in the semi-finals.

By the way: Argentina has just advanced to the final.  Maybe Messi will be God.

(Alas.)

The Beat Goes Off

posted: 7.3.14 by Jack Solomon

I confess to a certain fascination with the Beat generation.  Not because I belonged to it, mind you (I’m getting old, but I’m not that old: the Beats belonged to my parents’ generation), but because of their profound influence on America’s cultural revolution, a revolution that continues to roil, and divide, Americans to this day.  In other words, if you want to understand what is happening in our society now, knowing something about the history of the Beats is a good place to start.

Please understand that when I say this, my purpose is semiotic, not celebratory.  In fact, as far as I am concerned, the Beats, and their Boomer descendants, all too often equated personal freedom with hedonistic pleasure, leading America not away from materialism (as the counterculture originally claimed to do) but to today’s brand-obsessed, hyper-capitalistic consumerism.  What Marx called “commodity fetishism” has morphed into what Thomas Frank has called the “commodification of dissent” (you can find his essay on the phenomenon in Chapter 1 of Signs of Life in the USA), wherein even anti-consumerist gestures are sold as fashionable commodities, while money and what it can buy dominate our social agenda and consciousness.

But what interests me for the purposes of this blog is the fate of three recent movies that brought the Beats to the big screen.  The first is Walter Salles’ adaptation of Jack Kerouac’s signature Beat novel, On the Road (2012), a story that had been awaiting a cinematic treatment ever since Marlon Brando expressed an interest in it in 1957.  Another is John Krokidas’s Kill Your Darlings (2013), a treatment of the real-life killing of David Kammerer by Lucien Carr—a seminal figure in the early days of the Beats and a close friend of Allen Ginsberg, William Burroughs, and Jack Kerouac.  And the third is Big Sur (2013), a dramatization of Kerouac’s novel of the same title.

What is most interesting about these movies is their box office: though On the Road enjoyed a great deal of pre-release publicity and starred such high profile talent as Kristen Stewart, Viggo Mortensen, Kirsten Dunst, and Garrett Hedlund, its U.S. gross was $717,753, on an estimated budget of $25,000,000 (according to IMDb).  International proceeds were somewhat better (about eight and a half million dollars), but all in all, this was a major flop.

Kill Your Darlings did even worse.  Starring the likes of Daniel Radcliffe (as Allen Ginsberg?!) and Michael C. Hall, it grossed just $1,029,949, total (IMDb).

Big Sur, for its part, grossed .  .  .  wait for it .  .  . $33,621 (IMDb).  Even Kate Bosworth couldn’t save this one.

Can you spell “epic fail”?

As I ponder these high-profile commercial failures, I am reminded of another recent literary-historical movie set in a similar era, which, in spite of an even higher level of star appeal, flopped at the box office: Steven Zaillian’s 2006 version of Robert Penn Warren’s classic novel All the King’s Men.  Resituating the action from the 1930s to the 1950s, and boasting an all-star cast including such luminaries as Sean Penn, Jude Law, Anthony Hopkins, Kate Winslet, Mark Ruffalo, and the late James Gandolfini, the movie grossed $7,221,458 on an estimated $55,000,000 budget (IMDb).

Now, it is always possible to explain commercial failures like these on aesthetic grounds: that is, they simply could be badly executed movies.  And it is true that All the King’s Men got bad reviews, while On the Road’s reception was somewhat mixed (Wikipedia).  Kill Your Darlings, on the other hand, actually did pretty well with the reviewers and won a few awards (again according to Wikipedia).  But the key statistic for me is the fact that Jackass Number Two was released the same weekend as All the King’s Men and grossed $28.1 million (Wikipedia), four times as much as King’s, twenty-eight times as much as Darlings, and about forty times (US box office) as much as Road.  I don’t even want to calculate its relation to Big Sur.  So I don’t think that aesthetics explains these failures entirely.

Especially when one considers how just about any movie featuring superheroes, princesses, pirates, pandorans, maleficents, and minions (not to mention zombies and vampires) draws in the real crowds.  Such movies have an appeal that goes well beyond the parents-with-children market and includes a large number of the sort of viewers that one would expect to be interested in films starring Kristen Stewart, Daniel Radcliffe, and Jude Law.  But unlike the literary-historical dramas that failed, these successful films share not only a lot of special effects and spectacle but fantasy as well; and this, I think, is the key to the picture.

Indeed, you have to go back to the 1970s to find an era when fantasy was not the dominant film genre at the American box office, and since the turn of the millennium fantasy has ruled virtually supreme.  While it is not impossible to attain commercial success with a serious drama (literary-historical or otherwise), it is very difficult.

The success of movies like Glory, The Butler, and The Help demonstrates that movies tackling racial-historical themes resonate with American audiences, so I do not think that the failure of these Beat films can be attributed simply to America’s notorious lack of interest in history.  And, after all, The Great Gatsby (2013 version) did well enough.  Perhaps it is nothing more than a lack of interest in movies made by directors so personally enamored of their material that they forget they have to work hard to make it just as attractive to audiences (I get this impression from some Amazon reviews of the DVD of Kill Your Darlings).  Artistic types tend to identify with the Beats (the original hipsters), but apparently today’s hipsters aren’t interested in hipster history.  Given the failure of On the Road, Kill Your Darlings, and Big Sur (not to mention All the King’s Men), however, I would be surprised to see any further efforts in this direction.  If nothing else, today’s youth generation appears to be uninterested in the youthful experiences of their grandparents—spiritual and actual.  In all fairness, I suppose that one cannot blame them.

Girls

posted: 5.22.14 by Jack Solomon

Having just read two large classes’ worth of student papers whose purpose is to analyze the HBO hit series Girls semiotically, I am learning a great deal about this popular program.  There are many striking things about the show that I could write about, but for the purposes of this brief blog I will choose only one: the notoriously high level of nudity in Girls.  Indeed, as my students tell me, it appears that Hannah “gets naked” about twice per episode, and that this phenomenon is a much discussed feature of the show.  So, if you will, I will join the discussion here.

The debate over Hannah’s nudity seems to hinge on the physical appearance of the actress/writer who both portrays her and created her in the first place.  As with the Dove “Real Beauty” campaign, defenders of Lena Dunham’s nakedness celebrate this display of an ordinary female body, countering complaints about it with the retort that no one seems to be complaining about all of the nudity in Game of Thrones—another show highly popular among Millennials, featuring more conventionally beautiful actresses.  And so, from this perspective, Hannah’s nudity strikes a blow for women’s liberation.

I don’t think that this is all there is to the matter, however; for when we situate Girls in the system of contemporary television, we can see that there is a whole lot of such nudity and sexuality to be found—in Mad Men, for instance, with its sexual threesomes (two women to a man, not the other way around)—as well as a lot of rape in shows like Sons of Anarchy and, notoriously, Game of Thrones.  Indeed, reading my student papers is a bit of a jolting experience as I see the way they seem to take it for granted that of course television is going to be filled with rape scenes and not-so-soft porn.

The explanation (or excuse) for all this nudity, sexuality, and rape is often that it makes contemporary television more “realistic,” and, in an era when campus sexual assault has become a matter of national concern all the way up to the White House, this explanation is certainly true enough.  But there is a difference between a story that tells of such things and one that graphically shows them, and there, for me, lies the crux of the matter.

Let’s get back to Hannah’s nudity.  It generally occurs during decidedly unpleasant sex scenes, scenes in which Hannah is not only not experiencing much pleasure but is being humiliated in one way or another.  Marnie (another Girls regular) is also willing to humiliate herself sexually to hold on to her “boyfriend” Charlie.  Indeed, sex in Girls seems to bring almost nothing but humiliation, or worse.

This is quite different from shows like Friends, which bears a number of similarities to Girls.  In Friends, too, young people struggled to make it in New York during a down economy, and there was plenty of sexuality in that show too.  But the sex in Friends (while often rather puerile) was neither so explicit nor so painful as it so often is in Girls.  No, something has changed.  The sky has darkened.

Thus, I am unable to accept the Third Wave feminist argument that the sex and nudity in Girls (and contemporary television in general) is an expression of female empowerment, and that what counts is that women can choose what to do with their bodies.  There is simply too much of an appearance that such “choices” are really responses to what is expected of them.  Instead, I see something of a vicious circle: television shows depict young women being sexually humiliated in order to satisfy their viewers’ demands for “realism,” while young women, seeing such humiliation on so many of their favorite programs, come to expect it in their lives and behave accordingly.  Art here doesn’t only reflect reality; it helps shape it.

Perhaps that is the most striking sign of all here: that it is a dismal time to be young in America, and the young know it.  Whether economically or romantically, the world, shows like Girls are saying, is off kilter.  Whether or not things are really that bad, the responses to Girls indicate that people think that they are, and enjoy the dark humor that the program delivers.  There are worse things than laughing at the darkness, however, and Girls, after all, is a comedy, sort of.

Entrepreneurs in Toy Land

posted: 5.9.14 by Jack Solomon

A brief news item in the Chronicle of Higher Education reports that 89% of the business leaders polled in a Northeastern University survey believe that “colleges should increase teaching about entrepreneurship.”  Given that such corporate thinking has come to dominate current discourse on higher education and its purposes, it is worthy of a semiotic analysis, and I will sketch one out here.

First let’s be clear on the meaning of the word “entrepreneurship” itself so there isn’t any confusion.  Entrepreneurship is the defining quality of an entrepreneur, and an entrepreneur is someone who founds and directs new business ventures.  Of course, when thinking of such people, names like Steve Jobs, Bill Gates, Elon Musk, and Mark Zuckerberg always come to mind, and that, presumably, is what the business leaders surveyed in the Northeastern poll have in mind.

Fine, I do not mean to challenge this ethos of corporate creativity.  It is consistent with a number of traditional American mythologies, including our valuing of individualism, self-reliance, and what used to be called the Protestant Work Ethic.  But the problem lies in two major contradictions that the survey does not mention, contradictions that become clear when we look at the larger cultural system within which entrepreneurship functions.

The first contradiction lies in the fact that even as the entrepreneur is celebrated in corporate and educational discourse, the reality is that contemporary capitalism is becoming increasingly monopolistic.  For every successful entrepreneur there are countless entrepreneurs whose efforts have been wiped out by the giant companies that have already made it (just consider what Microsoft did to Netscape, or what Facebook did to MySpace, or, for that matter, what Sebastian Thrun once predicted about the fate of American universities in the era of the MOOC).  As the rules that were written in the Progressive Era to stem the monopoly capitalism of the late nineteenth century are loosened ever further (just look at the current controversy over FCC rules for the regulation of Internet service providers for an example of this trend), the odds against successful entrepreneurship are lengthening.  So for currently successful business leaders to urge today’s students to be more entrepreneurial seems more than a little problematic.  It’s like urging students to pursue an information technology education and then sending a significant portion of our IT jobs offshore.  This is a contradiction so deep that it could be called a betrayal.

But, as I say, there is a second contradiction.  Let’s recall that an entrepreneur is a hard-working, self-reliant individualist.  But at the very same time that American business leaders are calling for more entrepreneurial education, it is they, bolstered by billions of advertising and marketing dollars, who have created a society of passive consumerism and pleasure-seeking hedonism.  Those of us in education who would like to see our students work hard and think critically are swimming upstream against an always-on entertainment society wherein instant gratification and 24/7 join-the-crowd social networking are significant obstacles to student success.  You can’t work effectively and individually when you are constantly posting selfies to Instagram, listening to music, texting, updating your Facebook page, downloading TV programs, doing some online shopping, tweeting, “following,” “friending,” and otherwise multitasking on your smartphone.  But that is exactly what American business is working so hard to get our students to do.

So what I, as an educator, want to say to the business leaders who want me to teach entrepreneurship is, “please get out of my way.”  Stop pushing my students to believe that the instant gratification of every pleasure is far more important than time spent in study and personal effort.  If, as another part of the same Northeastern survey reports, fifty-four percent of the same business leaders believe that “the American higher-education system is falling behind developing and emerging countries in preparing students for the work force,” then it is time for those leaders to clean up their own house and stop treating students as consumers.

But I have no expectation at all that anything of the sort will happen.  After all, their own entrepreneurial success is grounded in wiping out the competition and treating human beings as markets: in short, in destroying the conditions that foster entrepreneurship.

The Colbert Report

posted: 4.24.14 by Jack Solomon

How do you know when an entertainment event is a cultural signifier?

Easy: it’s when Rush Limbaugh asserts that “it has just declared war on the heartland of America.”

That, anyway, is what Limbaugh has been widely reported as saying upon word that CBS has hired Stephen Colbert to replace David Letterman on CBS’s Late Show.  And while Limbaugh’s declaration may be the most spectacular of responses to Colbert’s hiring, it is but one of a virtually endless stream of comments about what is, from one perspective, merely a corporate personnel decision on the part of CBS.  But when such a decision gets this sort of attention, you can be quite confident that it is more than it appears: it is, in short, a sign, and therefore worthy of a semiotic analysis.

In conducting such an analysis, we don’t need to judge either Colbert or any particular response to his new television role.  The point is to analyze the significance both of his hiring and of the reaction to it.  And, as always, we need to begin with some contextualization.

Let’s look first at the system of late-night talk show hosts.  There have been quite a lot of them, but none looms larger than the late Johnny Carson.  As the now legendary host of The Tonight Show, Carson turned that program—and late night TV in general—into an institution, and what that institution once represented in the Carson era (I’m thinking here of the 1960s) was what I can best describe as a kind of laid-back Eisenhower-Republican ethos: a mild-mannered, good-humored Middle Americanism that, in the current political climate, would probably be denounced as “socialistic.”  Carson himself wasn’t political—at least not in any overtly partisan fashion—but in an era when a cultural revolution was convulsing the nation, his cheerfully bland monologues, banter, and let’s-kick-everything-off-with-a-golf-swing manner were a kind of haven for Middle American adults: the parents of those hell-raising baby boomers who regarded The Tonight Show as just another “plastic” signifier of an “Establishment” that they wished either to escape or transform.

That’s why it was so significant that, when Carson retired, such edgier hosts as Jay Leno (who got Carson’s job) and David Letterman (who essentially took Carson’s place, though on a different network) took over the late night watch.  These hosts—especially Letterman—were chosen precisely because they appealed to those edgier young baby boomers who, during the 1970s, had ceased to disdain late night talk shows and had become a coveted audience for them.  Though hardly an earth-shaking development, the ascendancy of Letterman was but one of many signifiers by the early 1980s of a certain mainstreaming of the cultural revolution.  The conservative backlash to the 1960s may have captured the White House through twelve years of Reagan-Bush, but within popular culture, at least, one of the last bastions of Middle Americanism had fallen, so to speak, to the left.

Today’s choice of an even edgier entertainer—in fact, America’s most wickedly funny satirist of right-wing media—to replace Letterman thus signifies an intensification of the trend.  The Colbert Report is one of the current youth generation’s favorite television programs, and CBS’s choice of Colbert among so many other powerful contenders (Tina Fey, anyone? Amy Poehler?) most certainly appears intended to capture the millennials as an audience for its post-Letterman Late Show.

I think that Limbaugh, then, is wrong to attribute either cultural or political motivations to CBS’s choice.  CBS’s motivations, like any motivation in a commercial popular culture, are simple: the company wants to make money, and it has judged that by appealing to millennials it will do just that.  In other words, CBS isn’t “declaring war” on anyone.  Still, I would agree with Limbaugh (did I just say that?!) that the choice of Colbert has political significance.  After all, Colbert’s whole shtick lies in a very partisan (much more partisan than Letterman, and even more so than Jon Stewart) ridiculing of some of the icons of conservative media, and that’s political.

For me, the fact that CBS apparently feels that it can safely choose an entertainer as potentially divisive as Stephen Colbert to rule the castle that Carson built (albeit on another network) is what is most significant here.  The cultural revolution has rolled along to a new stage.  In fact, it may well be one of the most potent signs today of the changing demography that has been worrying Republican strategists as they seek ways of recapturing the White House.  Whether or not the Colbert-hosted Late Show becomes a popular hit remains to be seen (as I have hinted above, a woman host might have been a better idea, and, after all, there is still a sizable Middle American component to the audience for late night talk TV, and it might not take to Colbert), but the significance of this choice will remain.  The Democrats, so to speak, have won this round, and they never even had to enter the ring.

So hail to the new satirist-in-chief.  I wonder how long his term will run.

Semiotics vs. Semiology

posted: 4.10.14 by Jack Solomon

The theme of this blog, as well as of Signs of Life in the U.S.A., is, of course, the practice of the semiotic analysis of popular culture in the composition classroom and in any course devoted to popular cultural study.  But it is worth noting that my choice of the word “semiotics,” rather than “semiology,” is grounded in a meaningful distinction.  For while the words “semiotics” and “semiology” are often used interchangeably (they both concern the analysis of signs), there is a technical distinction between them that I’d like to explain here.

To begin with, “semiology” is the study first proposed by Ferdinand de Saussure, and it came to be developed further into what we know today as structuralism.  “Semiotics,” on the other hand, is the term Charles Sanders Peirce coined (based on the existing Greek word “semiotikos”) to label his studies.  But the difference is not simply one of terminology, because the two words refer to significantly different theories of the sign.

Semiology, for its part—especially as it evolved into structuralism—is ultimately formalistic, taking signs (linguistic or otherwise) as being the products of formal relationships between the elements of a semiological system.  The key relationship is that of difference, or, as Saussure put it, “in a language system there are only differences without positive terms.”  The effect of this principle is to exclude anything outside the system in the formation of signs: signs don’t refer to extra-semiological realities but instead are constituted intra-semiologically through their relations to other signs within a given system.  Often called “the circle of signs” (or even, after Heidegger, “the prison house of language”), sign systems, as so conceived, constitute reality rather than discover or signify it.  It is on this basis that poststructuralism—from Derridean deconstruction to Baudrillardian social semiology to Foucauldian social constructivism—approaches reality: that is, as something always already mediated by signs.  Reality, accordingly, effectively evaporates, leaving only the circle of signs.

Semiotics, in Peirce’s view, is quite different, because it attempts to bring in an extra-semiotic reality that “grounds” sign systems (indeed, one of Peirce’s terms for this extra-semiotic reality is “ground”).  Peirce was no naïve realist, and he never proposed that we can (to borrow a phrase from J. Hillis Miller) “step barefoot into reality,” but he did believe that our sign systems not only incorporate our ever-growing knowledge of reality but also can give access to reality (he used the homely example of an apple pie recipe as a sequence of semiotic instructions that, if followed carefully, can produce a real apple pie that is not simply a sign).

For me, then, Peircean “semiotics” brings to the table a reality that Saussurean/structuralist/poststructuralist “semiology” does not, and since, in the end, I view the value of practicing popular cultural semiotics as lying precisely in the way that that practice can reveal actual realities to us, I prefer Peirce’s point of view, and, hence, his word.  But that doesn’t mean I throw semiology out the window.  As readers of this blog may note, I always identify the principle of difference as essential to a popular cultural semiotic analysis: that principle comes from semiology.  For me, it is a “blindness and insight” matter.  Saussure had a crucial insight about the role of difference in semiotic analysis, but he leaves us blind with respect to reality.  Peirce lets us have reality, but he doesn’t note the role of difference as cogently as Saussure does.  So, in taking what is most useful from both pioneers of the modern study of signs, we allow the two to complement each other, filling in one’s blindness with the other’s insight, and vice versa.

Add to this the fact that Peirce has a much clearer role for history to play in his theory of the sign than Saussure (and his legacy) has, and the need for such complementarity becomes even more urgent.  And finally, when we bring Roland Barthes’ ideological approach to the sign (he called it “mythology”) into the picture, we fill in yet another gap to be found in both Saussure and Peirce.  Put it all together—Peircean reality and history, Saussurean difference, and Barthesian mythology—and you get the semiotic method as I practice it.

And it works.

How Not To Do Popular Cultural Semiotics

posted: 3.27.14 by Jack Solomon

Back in December 2013 I wrote a complete Bits blog entry on the then just released Disney animated film “Frozen.”  Briefly touching upon the fact that, like the Marvel superhero Thor, here was another popular cultural phenomenon featuring archetypally  “white” characters—look at those gigantic blue eyes, those tiny pointed noses, the long ash blonde hair of one of the princesses (the other is a redhead) and the blonde mountain man .  .  . you get the picture—I focused on the continuing phenomenon of a bourgeois culture producing feudal popular art: you know, princesses in their kingdoms, princes, that sort of thing.

But I never posted it, and wrote something else instead.

Why?  Well, it’s always possible to overdo a good thing.  I figured that perhaps, as Christmas was approaching, whatever readers I may have here would not be thrilled with a political analysis of a seasonal fairy tale film.  While, semiotically speaking, nothing is ever just an entertainment, sometimes a semiotic analysis can feel just a bit heavy handed—or rather more than a bit.  So I let it go.

So picture my surprise when I encountered a national news story that is circulating in the wake of the recent Academy Awards ceremony.  It appears that “Frozen” is not only an Oscar winner; no, according to a blogger and at least one conservative radio host, “Frozen” is a devious example of a “gay agenda” to turn American children into homosexuals.  Worse yet, it also promotes bestiality.

Say what?

Let’s start with the bestiality part.  You see, concerned Americans don’t like the friendship between Kristoff (the mountain man) and Sven (his reindeer).  Well, um, OK, but if you think that that is coded bestiality, then you’re going to have to give up on America’s most red-blooded story type of all: the Western.  I mean, the old joke used to go that at the end of the typical Western the cowboy hero kissed his horse and not the girl, but we weren’t supposed to take that literally.

But what about the “gay agenda” thing?  Well, it goes something like this: Elsa, the princess with secret powers, isn’t very popular, and she doesn’t have a boyfriend.  Obviously, then, her powers are a metaphor for her homosexuality.  Then there is the fact that her sister princess (the redhead) is forced into a marriage she doesn’t want, which is clearly an attack on heterosexual marriage.  And, finally, the popularity of Elsa in the happy ending of the movie is blatantly a message to America to embrace its erstwhile ostracized homosexuals.

[Insert forehead slap here].

I’m sorry, but this is not a good semiotic analysis.  Semiotic analyses do not seek out hidden allegories without textual support.  They begin with a precise sense of the denotation of the sign, what exactly one is observing, and move to what such denotations may signify.  In this sense, if I were to pursue my earlier analysis of the film, the princesses are white; their features are stylizations of characteristically northern European appearance.  They are princesses; they do live in a “kingdom.”  These are medieval phenomena, and the question then becomes: what do such manifest facts culturally connote in a bourgeois society transitioning away from having a Caucasian majority?  Whatever answers one gives to such questions must be abductive: that is, in C.S. Peirce’s sense of the term, they must constitute the most likely interpretations of the signs.

When an interpretation gets into wildly unlikely readings of what isn’t remotely denotatively present, there’s bound to be trouble.  And when one piece of “evidence” offered in support of the “gay agenda” thesis is that “the Devil” may have purchased the Disney Corporation in order to corrupt America’s children, um (I know I am using this pseudo-word a lot here, but, um, well, what else can one say in this overly sensitive world?), you really know that you’ve got semiotic Trouble with a capital “T”.

I know that we have been here before, that Fredric Wertham’s 1954 Seduction of the Innocent accused America’s comic book writers of trying to turn American boys into homosexuals (Batman and Robin, get it?), but to see this in 2014 .  .  .  ?

Wait a minute: here is our cultural signifier for the day.  When people are, with apparent seriousness, reviving Cold War style, McCarthyite attacks on popular culture (that’s the denotation of the sign here), it is a reasonable interpretation that such people are, well, reviving Cold War era McCarthyite politics.  When you situate this “gay agenda” interpretation of “Frozen” into a cultural system that includes Arizona’s recent attempt to make discrimination against gays legal on “religious” grounds, not to mention the Chick-fil-A controversy, the Duck Dynasty controversy, and all the anti-gay marriage referenda that have been passed, this is quite a likely abduction.  After all, in such a world gays are the new “communists”.

Maybe it’s time to start playing Bob Dylan’s “Talkin’ John Birch Paranoid Blues” again.  I need some comic relief.

The Popular Art of Dystopia

posted: 3.14.14 by Jack Solomon

I’ve recently had occasion to participate in some classroom discussions of two famous dystopian stories: Shirley Jackson’s “The Lottery” and Suzanne Collins’ Hunger Games trilogy.  Well, of course, how could one discuss Jackson’s classic in a contemporary literature class without invoking Collins’ work, and, conversely, how could one discuss Collins without citing Jackson’s chilling predecessor?  But as I contemplated these two stories I realized that in spite of all that they have in common—after all, they are both visions of societies that, in effect, practice human sacrifice—there is a crucial difference between them, a difference that can help us, tentatively and incompletely, identify at least one distinction between “high art” and “popular art”.

I’m not referring here to the fact that the Hunger Games trilogy is an infinitely more complex tale than is “The Lottery,” written in the tradition of fantasy storytelling while unfolding a vast allegory of a socioeconomically unequal America that is devouring its own children, though of course there are those differences.  I am referring instead to something very simple, very basic, something so obvious that when I asked students to identify that difference they seemed puzzled and couldn’t seem to grasp what I was getting at.  So here it is: the endings of the two stories, how they come out.  Now, those of us who have been trained in the close reading of literature may forget sometimes just how crucial the ending of a story is, but for the ordinary, mass reader, as it were, it is essential, and it is the difference in the endings of “The Lottery” and the Hunger Games trilogy that I want to explore here.

Let’s begin with the Hunger Games trilogy.  Though it takes three large novels to do so, and there is much suffering, death, and destruction along the way (not to mention betrayal and moral ambiguity), in the end the tyrannical society of Panem is overthrown in a popular rebellion.  Not only that, but Katniss, the heroine of the trilogy, lives to marry Peeta and look back on the triumphant (if traumatizing) life that she has led.  It hasn’t been easy, and there has been some collateral damage, but the bad guys lose, the good guys win, and, all in all, there’s a happy ending.

Compare this to “The Lottery.”  It has a female protagonist (sort of), too: Tessie Hutchinson.  But while Tessie is certainly to be pitied, she is hardly someone to identify with, and even less a heroine who can bring hope to a hopeless situation.  Content to go along with the hideous ritual of her society until she becomes its victim, Tessie isn’t even a good martyr, and her death at the end does not lead to a rebellion.  With the chilling conclusion of the tale we can be certain that next year “the lottery” will be held again.

And there you have it: while I would not presume to explicate all of the potential readings of this magnificent story, I dare say that it presents us with something horrible not only in the human condition but within human nature itself.  Written in 1948, “The Lottery” had behind it the only-too-true history of the Holocaust, which makes it far more than an allegorical critique of mere social “conformity.”  And, not too surprisingly, the original response to the story was rather negative, because, unlike the Hunger Games trilogy, there is nothing to cling to here: no plucky heroine, no rebellion, no victory in the end over evil, no happy ending .  .  .  nothing but pure bleakness.

 

Which takes me to my point.  For while the difference between “high art” literature and popular literature is historically contingent, fluid, and indeterminate, whenever I am asked for (or feel the need to propose) a way of distinguishing between “high art” and “popular art”, I suggest that high art gives us what we need, while popular art gives us what we want.  A commodity for sale, popular art must offer its purchaser something desired, and pleasure is usually what is wanted.  It is a pleasure to see Katniss survive (along with the boy who will become her husband in the end); it is a pleasure to see the tyrants of Panem fall; it is a pleasure to identify with Katniss (or Frodo, or Harry Potter, or Batman, or any fantasy hero who, one way or another, defeats evil in the end).  But reality doesn’t work out that way, and, corny as this may sound, we need artists to tell us that.  Because when we succumb to the fantasy that we have paid for, the vision of the happy ending that makes us feel good, we are all the less likely to try to do anything about the evils that make us feel bad.  This is why it matters that “high art” literature is being pushed aside in favor of popular literature in the literary marketplace, because while we all need to be entertained, we need to see the truth sometimes as well.

Balancing Mythologies

posted: 2.27.14 by Jack Solomon

In a recent classroom discussion concerning the extraordinary attraction of digital social networking, and the possible significance of that attraction, one of my students noted (among the many astute observations made throughout the class) that there was something about social networking suggesting that people felt their personal experience wasn’t somehow valid unless it could be shared on Instagram, Snapchat, Pinstagram, etc.  This is a strikingly significant observation, and I would like to pursue it further here.

That large numbers of people in a media-saturated era should feel the need to share and broadcast their experiences in order to “authenticate” (or more fully realize) them, is not at all surprising.  After all, with the advent of the cinema—and the celebrity system that accompanied it—a century or so ago, the prospect of having one’s being expanded, both literally and figuratively, on a big screen became one of the key attractions of mass culture.  “Celluloid heroes never really die,” as Ray Davies has put it, and their lives take on dimensions that transcend those of ordinary folk.

But with the advent of social media, anyone can broadcast oneself—can, that is to say, become a subject of the mass media—and while a hundred “friends” on Facebook and a handful of “followers” on Twitter doth not a celebrity make, the feel of mass media fame is there for the taking, and hundreds of millions of people have jumped right in and taken it.

There is clearly something intoxicating, and even addictive, about living one’s life online, in posting oneself through an image or a tweet or a comment and eagerly awaiting the response.  I believe that this desire to be acknowledged, to have one’s experience validated, as it were, is a key part of the attraction of social networking.  It is a very basic human need and is a central component of that social characteristic called “hetero-directedness.”  To be hetero-directed is to live your life in relation to what others think about you.  Children and adolescents are especially hetero-directed, but so too are adults who are ambitious for fame or who purchase things according to their status value (what Marx calls “commodity fetishism” is a form of hetero-directedness).

When we look at American history, we can find prominent examples of hetero-directedness, especially among the Congregationalists (better known as the Puritans) who settled the New England colonies.  For the Congregationalists, life was lived not only in relation to their God but also in relation to everyone else within the congregation.  Indeed, it was the responsibility of every Puritan to demonstrate to others the signs of their salvation in order to be admitted into the congregation.  In more recent times, the 1950s, with their intense pressure for social conformity, can also be described as an especially hetero-directed era.

With such a history, one might say that hetero-directedness is an American mythology, a social value, and that the advent of digitally-enabled social networking is raising that mythology to new prominence in the era of the global “hive.”  But as with so many American mythologies, there is a contrary tradition in our history—one that we can associate with such voices as Emerson’s and Thoreau’s—which values individualism and self-reliance, and that mythology appears to be declining in relation to the resurgence of the mythology of hetero-directedness.

Is this something that we should care about?  Well, of course, the answer depends upon one’s own ideological inclinations.  Both hetero-directedness and individualism have their attractions, and both have their problems.  A hetero-directed society, for instance, can be a socially responsible one, a society where people care for and take care of each other.  But it can also be a place of compulsory conformity governed by a tyranny of the majority.  Indeed, as actress Ellen Page put it in a recent speech, hetero-directedness—living too much according to the expectations (and judgments) of others—can lead to a loss of self and authenticity.

An individualistic society, on the other hand, can be a site of freedom and opportunity, but it can also devolve into anti-social anarchy and even socio-pathology if taken too far.  There are plenty of signs of the latter in the current environment, and they are no less a concern than the specter of compulsory conformity.

So, we have two conflicting mythologies.  Are we compelled to choose simply one or the other?  In my own ideological view, the answer is “no,” because there is another American mythological tradition that is often forgotten in these highly polarized times.  This is what could be called the “mythology of the middle”: the belief, voiced in the eighteenth century by J. Hector St. John de Crèvecoeur, that America is a land where the extremes have been flattened out, where people sought economic “competence,” not luxury, and where the ethnic, religious, and class differences that polarized societies elsewhere were reconciled here in the shaping of a new identity, that of the American.

Crevecoeur’s belief, of course, like so many cultural mythologies, clashed with the realities of his times.  It was a goal, not an accurate description of America.  But as a goal it offers a highly worthy mythology for our fractured times, and it can be applied to the conflicting visions of individualism and hetero-directedness.  Maintained in a dynamic balance, the two can complement each other, accenting what each has to offer while muting their dangers when taken to extremes.  Put it this way: it’s fine to post something up on Instagram now and then, but if you can’t take a walk by yourself in the woods without your smart phone, busily posting selfies while exchanging tweets and text messages, perhaps it’s time to think deeply about what you are doing.

Of Puppies and Paradoxes

posted: 2.13.14 by Jack Solomon

In my last blog I discussed the difference between a formalist semiotic analysis and a cultural one.  In this blog I would like to make that discussion more concrete by looking at one of the most popular ads broadcast during Super Bowl XLVIII.  Yup, “Puppy Love.”

Let’s begin with the formal semiotics.  This is an ad with a narrative, but the narrative is conducted via visual images and a popular-song voice-over rather than through a verbal text.  The images are formal signs that tell us a story about a working horse ranch that is also a permanent source of puppies up for adoption—as signified by the carved sign placed in front of a ranch house reading “Warm Springs Puppy Adoption.”  It is also important to note that while the ad could be denoting a dog rescue operation, the fact that we see a pen full of nothing but Golden Retriever puppies who are all of the same age suggests that it is more likely that the young couple who run the ranch and the puppy adoption facility are Golden Retriever breeders.  We’ll get back to this shortly.

The visual narrative informs us, quite clearly, that one of the puppies is close friends with one of the Clydesdale horses on the ranch, and that he is unhappy when he (or she, of course) is adopted and taken away from the ranch.  We see a series of images of the puppy escaping from his (or her) new home by digging under fences and such and returning to the ranch.  After one such escape, the Clydesdales themselves band together to prevent the puppy’s return to his adoptive home, and the final images show the puppy triumphantly restored to his rightful place with his friend on the ranch.

It’s a heartwarming ad with a happy ending that is intended to pull at our heartstrings.  And that leads us to our first, and most obvious, cultural semiotic interpretation of the ad.  The ad assumes (and this is a good thing) a tenderheartedness in its audience/market towards animals—especially puppies and horses.  It assumes that the audience will be saddened by the puppy’s unhappiness at being separated from his Clydesdale buddy, and will be elated when the puppy, together with Clydesdale assistance, is permanently reunited with his friend.  Of course, audience association of this elation with a group of Clydesdales (Budweiser’s most enduring animal mascot) will lead (the ad’s sponsors hope) to the consumption of Budweiser products.

So, what’s not to like?  The first level of cultural semiotic interpretation here reveals that America is a land where it can be assumed that there are enough animal lovers that a sentimental mass market commercial designed for America’s largest mass media audience of the year will be successful.  Heck (to reverse the old W.C. Fields quip), any country that likes puppies and horses can’t be all bad.

But there is more to it than that.  As I watch this ad I cannot help but associate it with a movie that was made in 2009 called Hachi: A Dog’s Tale.  The movie was directed by an internationally famous director (Lasse Hallström) and starred actors of no less stature than Richard Gere and Joan Allen (with a sort of cameo by Jason Alexander).  And it was never released to U.S. theaters.

Yes, that’s right.  While Hachi: A Dog’s Tale was released internationally, received decent reviews, and even made a respectable amount of money, this Richard Gere movie has only been accessible to American audiences through DVD sales.  With talent like that at the top of the bill, what happened?  Why wasn’t it released to the theaters?

Well, you see, the movie is based on a true story that unfolded in Japan before the Second World War.  It is the story of an Akita whose master died one day while lecturing at the university where he taught, and so never returned to the train station where the Akita had always greeted him upon his return home.  The dog continued to return to the train station most (if not every) evening for about ten years, sometimes escaping from his new owners in order to do so.  He was finally found dead in the streets.

Hachiko, the original name of the dog, is a culture hero in Japan, and there is a statue of him at the train station where he kept vigil for ten years.  A movie about him was made in Japan in 1987, and while the U.S. version is Americanized, it is pretty faithful to the original story and to the Japanese film.

Which probably explains why it was never released for U.S. theatrical distribution.  I mean, the thing is absolutely heartbreaking.  Have a look at the comments sections following the YouTube clips of the movie, or the Amazon reviews of the DVD; almost everyone says the same thing: how they weep uncontrollably whenever they watch it.  It is significant that the DVD cover for the movie makes it look like a warm and fuzzy “man’s best friend” flick that children and Richard Gere fans alike can love.  Yes, it’s a rather good movie (the music is extraordinary), but warm and fuzzy it ain’t.

And this takes us to the next level of interpretation of “Puppy Love.”  Like Hachi, the puppy in the ad doesn’t want to be separated from someone he loves.  But unlike Hachi, the puppy is happily reunited with his friend in the end.  His tale is a happy one—and an unrealistic one.  It is a wrenching experience for all puppies to be adopted away from their families (which are their initial packs), but they don’t tend to be allowed to go back.  And animals are permanently separated from the people whom they love (and who loved them) all the time, due to various circumstances that can never be explained to them.  This is what makes Hachi: A Dog’s Tale so heartrending: it reveals a reality that is not comfortable to think about.  Evidently this was too much reality for theatrical release.

So “Puppy Love” distracts us from some uncomfortable realities, including the fact that puppies are bred all the time as commodities who will be separated from their original homes (which is why it is significant that the “Puppy Adoption” sign in the ad seems to indicate a breeding operation) and have their hearts broken.  The ad makes us feel otherwise: that everything is OK.  This is what entertainment does, and that is what is meant by saying that entertainment is “distracting.”  But feeling good about puppy mills isn’t good for puppies.  And feeling good in the face of the many hard realities of life can lessen audience desire to do something about those realities.

And that takes us to a broader cultural-semiotic interpretation: as Max Horkheimer and Theodor Adorno suggested over half a century ago, the American entertainment industry has been working for many years to distract its audience from the unpleasant realities of their lives, thus conditioning them to accept those realities.  Horkheimer and Adorno have gone out of fashion in recent years, but I still think that they have a lot to tell us about just why Americans continue to accept a status quo that is anything but heartwarming.
