Teaching Popular Cultural Semiotics

Jack Solomon is professor of English at California State University, Northridge, where he teaches literature and critical theory. He is often interviewed by the California media for analysis of current events and trends. He is co-author, with Sonia Maasik, of Signs of Life in the U.S.A.: Readings on Popular Culture for Writers and California Dreams and Realities: Readings for Critical Thinkers and Writers, and is author of The Signs of Our Time and Discourse and Reference in the Nuclear Age.

Semiotics vs. Semiology

posted: 4.10.14 by Jack Solomon

The theme of this blog, as well as Signs of Life in the U.S.A., is, of course, the practice of the semiotic analysis of popular culture in the composition classroom and in any course devoted to popular cultural study.  But it is worth noting that my choice of the word “semiotics,” rather than “semiology,” is grounded in a meaningful distinction.  For while the words “semiotics” and “semiology” are often used interchangeably (they both concern the analysis of signs), there is a technical distinction between them that I’d like to explain here.

To begin with, “semiology” is the name Ferdinand de Saussure gave to the study of signs he first proposed, a study that came to be developed further into what we know today as structuralism.  “Semiotics,” on the other hand, is the term Charles Sanders Peirce coined (based on the existing Greek word “semiotikos”) to label his own studies.  But the difference is not simply one of terminology, because the two words refer to significantly different theories of the sign.

Semiology, for its part—especially as it evolved into structuralism—is ultimately formalistic, taking signs (linguistic or otherwise) to be the products of formal relationships between the elements of a semiological system.  The key relationship is that of difference, or as Saussure put it, “in a language system there are only differences without positive terms.”  The effect of this principle is to exclude anything outside the system from the formation of signs: signs don’t refer to extra-semiological realities but instead are constituted intra-semiologically through their relations to other signs within a given system.  Often called “the circle of signs” (or even, after Heidegger, “the prison house of language”), sign systems, as so conceived, constitute reality rather than discover or signify it.  It is on this basis that poststructuralism—from Derridean deconstruction to Baudrillardian social semiology to Foucauldian social constructivism—approaches reality: that is, as something always already mediated by signs.  Reality, accordingly, effectively evaporates, leaving only the circle of signs.

Semiotics, in Peirce’s view, is quite different, because it attempts to bring in an extra-semiotic reality that “grounds” sign systems (indeed, one of Peirce’s terms for this extra-semiotic reality is “ground”).  Peirce was no naïve realist, and he never proposed that we can (to borrow a phrase from J. Hillis Miller) “step barefoot into reality,” but he did believe that our sign systems not only incorporate our ever-growing knowledge of reality but can also give us access to reality (he offers the homely example of an apple pie recipe as a sequence of semiotic instructions that, if followed carefully, will produce a real apple pie that is not simply a sign).

For me, then, Peircean “semiotics” brings to the table a reality that Saussurean/structuralist/poststructuralist “semiology” does not, and since, in the end, I view the value of practicing popular cultural semiotics as lying precisely in the way that practice can reveal actual realities to us, I prefer Peirce’s point of view, and, hence, his word.  But that doesn’t mean I throw semiology out the window.  As readers of this blog may note, I always identify the principle of difference as essential to a popular cultural semiotic analysis: that principle comes from semiology.  For me, it is a “blindness and insight” matter.  Saussure had a crucial insight about the role of difference in semiotic analysis, but he left us blind with respect to reality.  Peirce let us have reality, but he did not note the role of difference as cogently as Saussure did.  So, in taking what is most useful from both pioneers of the modern study of signs, we allow the two to complement each other, filling in one’s blindness with the other’s insight, and vice versa.

Add to this the fact that Peirce has a much clearer role for history to play in his theory of the sign than Saussure (and his legacy) has, and the need for such complementarity becomes even more urgent.  And finally, when we bring Roland Barthes’ ideological approach to the sign (he called it “mythology”) into the picture, we fill in yet another gap to be found in both Saussure and Peirce.  Put it all together—Peircean reality and history, Saussurean difference, and Barthesian mythology—and you get the semiotic method as I practice it.

And it works.

How Not To Do Popular Cultural Semiotics

posted: 3.27.14 by Jack Solomon

Back in December 2013 I wrote a complete Bits blog entry on the then just released Disney animated film “Frozen.”  Briefly touching upon the fact that, like the Marvel superhero Thor, here was another popular cultural phenomenon featuring archetypally  “white” characters—look at those gigantic blue eyes, those tiny pointed noses, the long ash blonde hair of one of the princesses (the other is a redhead) and the blonde mountain man .  .  . you get the picture—I focused on the continuing phenomenon of a bourgeois culture producing feudal popular art: you know, princesses in their kingdoms, princes, that sort of thing.

But I never posted it, and wrote something else instead.

Why?  Well, it’s always possible to overdo a good thing.  I figured that perhaps, as Christmas was approaching, whatever readers I may have here would not be thrilled with a political analysis of a seasonal fairy tale film.  While, semiotically speaking, nothing is ever just an entertainment, sometimes a semiotic analysis can feel just a bit heavy-handed—or rather more than a bit.  So I let it go.

So picture my surprise when I encountered a national news story that is circulating in the wake of the recent Academy Awards ceremony.  It appears that “Frozen” is not only an Oscar winner; no, according to a blogger and at least one conservative radio host, “Frozen” is a devious example of a “gay agenda” to turn American children into homosexuals.  Worse yet, it also promotes bestiality.

Say what?

Let’s start with the bestiality part.  You see, concerned Americans don’t like the friendship between Kristoff (the mountain man) and Sven (his reindeer).  Well, um, OK, but if you think that that is coded bestiality, then you’re going to have to give up on America’s most red-blooded story type of all: the Western.  I mean, the old joke used to be that at the end of the typical Western the cowboy hero kissed his horse and not the girl, but we weren’t supposed to take that literally.

But what about the “gay agenda” thing?  Well, it goes something like this: Elsa, the princess with secret powers, isn’t very popular, and she doesn’t have a boyfriend.  Obviously, then, her powers are a metaphor for her homosexuality.  Then, the fact that her sister princess (the redhead) is forced into a marriage she doesn’t want is clearly an attack on heterosexual marriage.  And, finally, the popularity of Elsa in the happy ending of the movie is blatantly a message to America to embrace its erstwhile ostracized homosexuals.

[Insert forehead slap here].

I’m sorry, but this is not a good semiotic analysis.  Semiotic analyses do not seek out hidden allegories without textual support.  They begin with a precise sense of the denotation of the sign, of what exactly one is observing, and move to what such denotations may signify.  In this sense, if I were to pursue my earlier analysis of the film, the princesses are white; their features are stylizations of characteristically northern European appearance.  They are princesses; they do live in a “kingdom.”  These are medieval phenomena, and the question then becomes: what do such manifest facts culturally connote in a bourgeois society that is transitioning away from having a Caucasian majority?  Whatever answers one gives to such questions must be abductive: that is, in C.S. Peirce’s sense of the term, they must constitute the most likely interpretations of the signs.

When an analysis gets into wildly unlikely interpretations of what isn’t remotely denotatively present, there’s bound to be trouble.  And when one piece of “evidence” offered in support of the “gay agenda” thesis is that “the Devil” may have purchased the Disney Corporation in order to corrupt America’s children, um (I know I am using this pseudo-word a lot here, but, um, well, what else can one say in this overly sensitive world?), you really know that you’ve got semiotic Trouble with a capital “T”.

I know that we have been here before, that Fredric Wertham’s 1954 Seduction of the Innocent accused America’s comic book writers of trying to turn American boys into homosexuals (Batman and Robin, get it?), but to see this in 2014 .  .  .  ?

Wait a minute: here is our cultural signifier for the day.  When people are, with apparent seriousness, reviving Cold War-style, McCarthyite attacks on popular culture (that’s the denotation of the sign here), it is a reasonable interpretation that such people are, well, reviving Cold War-era McCarthyite politics.  When you situate this “gay agenda” interpretation of “Frozen” within a cultural system that includes Arizona’s recent attempt to make discrimination against gays legal on “religious” grounds, not to mention the Chick-fil-A controversy, the Duck Dynasty controversy, and all the anti-gay-marriage referenda that have been passed, this is quite a likely abduction.  After all, in such a world gays are the new “communists”.

Maybe it’s time to start playing Bob Dylan’s “Talkin John Birch Paranoid Blues” again.  I need some comic relief.

The Popular Art of Dystopia

posted: 3.14.14 by Jack Solomon

I’ve recently had occasion to participate in some classroom discussions of two famous dystopian stories: Shirley Jackson’s “The Lottery” and Suzanne Collins’ Hunger Games trilogy.  Well, of course: how could one discuss Jackson’s classic in a contemporary literature class without invoking Collins’ work, and, conversely, how could one discuss Collins without citing Jackson’s chilling predecessor?  But as I contemplated these two stories I realized that in spite of all that they have in common—after all, they are both visions of societies that, in effect, practice human sacrifice—there is a crucial difference between them, a difference that can help us, tentatively and incompletely, identify at least one distinction between “high art” and “popular art”.

I’m not referring here to the fact that the Hunger Games trilogy is an infinitely more complex tale than “The Lottery,” written in the tradition of fantasy storytelling while unfolding a vast allegory of a socioeconomically unequal America that is devouring its own children, though of course there are those differences.  I am referring instead to something very simple, very basic, something so obvious that when I asked students to identify that difference they were puzzled and couldn’t seem to grasp what I was getting at.  So here it is: the endings of the two stories, how they come out.  Now, those of us who have been trained in the close reading of literature may forget sometimes just how crucial the ending of a story is, but for the ordinary, mass reader, as it were, it is essential, and it is the difference in the endings of “The Lottery” and the Hunger Games trilogy that I want to explore here.

Let’s begin with the Hunger Games trilogy.  Though it takes three large novels to do so, and there is much suffering, death, and destruction along the way (not to mention betrayal and moral ambiguity), in the end the tyrannical society of Panem is overthrown in a popular rebellion.  Not only that, but Katniss, the heroine of the trilogy, lives to marry Peeta and look back on the triumphant (if traumatizing) life that she has led.  It hasn’t been easy, and there has been some collateral damage, but the bad guys lose, the good guys win, and, all in all, there’s a happy ending.

Compare this to “The Lottery.”  It has a female protagonist (sort of), too: Tessie Hutchinson.  But while Tessie is certainly to be pitied, she is hardly someone to identify with, and even less a heroine who can bring hope to a hopeless situation.  Content to go along with the hideous ritual of her society until she becomes its victim, Tessie isn’t even a good martyr, and her death at the end does not lead to a rebellion.  With the chilling conclusion of the tale we can be certain that next year “the lottery” will be held again.

And there you have it: while I would not presume to explicate all of the potential readings of this magnificent story, I dare say that it presents us with something horrible not only in the human condition but within human nature itself.  Written in 1948, “The Lottery” had behind it the only-too-true history of the Holocaust, which makes it far more than an allegorical critique of mere social “conformity”.  And, not too surprisingly, the original response to the story was rather negative, because, unlike the Hunger Games trilogy, there is nothing to cling to here: no plucky heroine, no rebellion, no victory in the end over evil, no happy ending .  .  .  nothing but pure bleakness.

Which takes me to my point.  For while the difference between “high art” literature and popular literature is historically contingent, fluid, and indeterminate, whenever I am asked for (or feel the need to propose) a way of distinguishing between “high art” and “popular art”, I suggest that high art gives us what we need, while popular art gives us what we want.  A commodity for sale, popular art must offer its purchaser something desired, and pleasure is usually what is wanted.  It is a pleasure to see Katniss survive (along with the boy who will become her husband in the end); it is a pleasure to see the tyrants of Panem fall; it is a pleasure to identify with Katniss (or Frodo, or Harry Potter, or Batman, or any fantasy hero who, one way or another, defeats evil in the end).  But reality doesn’t work out that way, and, corny as this may sound, we need artists to tell us that.  Because when we succumb to the fantasy that we have paid for, the vision of the happy ending that makes us feel good, we are all the less likely to try to do anything about the evils that make us feel bad.  This is why it matters that “high art” literature is being pushed aside in favor of popular literature in the literary marketplace: while we all need to be entertained, we need to see the truth sometimes as well.

Balancing Mythologies

posted: 2.27.14 by Jack Solomon

In a recent classroom discussion concerning the extraordinary attraction of digital social networking, and the possible significance of that attraction, one of my students noted (among the many astute observations made throughout the class) that there was something about social networking that suggested that people felt their personal experience somehow wasn’t valid unless it could be shared on Instagram, Snapchat, Pinstagram, etc.  This is a strikingly significant observation, and I would like to pursue it further here.

That large numbers of people in a media-saturated era should feel the need to share and broadcast their experiences in order to “authenticate” (or more fully realize) them is not at all surprising.  After all, with the advent of the cinema—and the celebrity system that accompanied it—a century or so ago, the prospect of having one’s being expanded, both literally and figuratively, on a big screen became one of the key attractions of mass culture.  “Celluloid heroes never really die,” as Ray Davies has put it, and their lives take on dimensions that transcend those of ordinary folk.

But with the advent of social media, anyone can broadcast himself or herself—can, that is to say, become a subject of the mass media.  And while a hundred “friends” on Facebook and a handful of “followers” on Twitter doth not a celebrity make, the feel of mass media fame is there for the taking, and hundreds of millions of people have jumped right in and taken it.

There is clearly something intoxicating, and even addictive, about living one’s life online, in posting oneself through an image or a tweet or a comment and eagerly awaiting the response.  I believe that this desire to be acknowledged, to have one’s experience validated, as it were, is a key part of the attraction of social networking.  It is a very basic human need and is a central component of that social characteristic called “hetero-directedness.”  To be hetero-directed is to live your life in relation to what others think about you.  Children and adolescents are especially hetero-directed, but so too are adults who are ambitious for fame or who purchase things according to their status value (what Marx calls “commodity fetishism” is a form of hetero-directedness).

When we look at American history, we can find prominent examples of hetero-directedness, especially among the Congregationalists (better known as the Puritans) who settled the New England colonies.  For the Congregationalists, life was lived not only in relation to their God but also in relation to everyone else within the congregation.  Indeed, it was the responsibility of every Puritan to demonstrate to others the signs of their salvation in order to be admitted into the congregation.  In more recent times, the 1950s, with their intense pressure for social conformity, can also be described as an especially hetero-directed era.

With such a history, one might say that hetero-directedness is an American mythology, a social value, and that the advent of digitally-enabled social networking is raising that mythology to new prominence in the era of the global “hive.”  But as with so many American mythologies, there is a contrary tradition in our history—one that we can associate with such voices as Emerson’s and Thoreau’s—which values individualism and self-reliance, and that mythology appears to be declining in relation to the resurgence of the mythology of hetero-directedness.

Is this something that we should care about?  Well, of course, the answer depends upon one’s own ideological inclinations.  Both hetero-directedness and individualism have their attractions, and both have their problems.  A hetero-directed society, for instance, can be a socially responsible one, a society where people care for and take care of each other.  But it can also be a place of compulsory conformity governed by a tyranny of the majority.  Indeed, as actress Ellen Page put it in a recent speech, hetero-directedness—living too much according to the expectations (and judgments) of others—can lead to a loss of self and authenticity.

An individualistic society, on the other hand, can be a site of freedom and opportunity, but it can also devolve into anti-social anarchy and even socio-pathology if taken too far.  There are plenty of signs of the latter in the current environment, and they are no less a concern than the specter of compulsory conformity.

So, we have two conflicting mythologies.  Are we compelled to choose simply one or the other?  In my own ideological view, the answer is “no,” because there is another American mythological tradition that is often forgotten in these highly polarized times.  This is what could be called the “mythology of the middle”: the belief, voiced in the eighteenth century by St. John de Crevecoeur, that America was a land where the extremes had been flattened out, where people sought economic “competence,” not luxury, and where the ethnic, religious, and class differences that polarized societies elsewhere were reconciled in the shaping of a new identity, that of the American.

Crevecoeur’s belief, of course, like so many cultural mythologies, clashed with the realities of his times.  It was a goal, not an accurate description of America.  But as a goal it offers a highly worthy mythology for our fractured times, and it can be applied to the conflicting visions of individualism and hetero-directedness.  Maintained in a dynamic balance, the two can complement each other, accenting what each has to offer while muting the dangers each poses when taken to extremes.  Put it this way: it’s fine to post something on Instagram now and then, but if you can’t take a walk by yourself in the woods without your smartphone, busily posting selfies while exchanging tweets and text messages, perhaps it’s time to think deeply about what you are doing.

Of Puppies and Paradoxes

posted: 2.13.14 by Jack Solomon

In my last blog I discussed the difference between a formalist semiotic analysis and a cultural one.  In this blog I would like to make that discussion more concrete by looking at one of the most popular ads broadcast during Super Bowl XLVIII.  Yup, “Puppy Love.”

Let’s begin with the formal semiotics.  This is an ad with a narrative, but the narrative is conducted via visual images and a popular song voice-over rather than through a verbal text.  The images are formal signs that tell us a story about a working horse ranch that is also a permanent source of puppies up for adoption—as signified by the carved sign placed in front of a ranch house reading “Warm Springs Puppy Adoption.”  It is also important to note that while the ad could be denoting a dog rescue operation, the fact that we see a pen full of nothing but Golden Retriever puppies who are all of the same age suggests that it is more likely that the young couple who run the ranch and the puppy adoption facility are Golden Retriever breeders.  We’ll get back to this shortly.

The visual narrative informs us, quite clearly, that one of the puppies is close friends with one of the Clydesdale horses on the ranch, and that he is unhappy when he (or she, of course) is adopted and taken away from the ranch.  We see a series of images of the puppy escaping from his (or her) new home by digging under fences and such and returning to the ranch.  After one such escape, the Clydesdales themselves band together to prevent the puppy’s return to his adoptive home, and the final images show the puppy triumphantly restored to his rightful place with his friend on the ranch.

It’s a heartwarming ad with a happy ending that is intended to pull at our heartstrings.  And that leads us to our first, and most obvious, cultural semiotic interpretation of the ad.  The ad assumes (and this is a good thing) a tender heartedness in its audience/market towards animals—especially puppies and horses.  It assumes that the audience will be saddened by the puppy’s unhappiness in being separated from his Clydesdale buddy, and will be elated when the puppy, together with Clydesdale assistance, is permanently reunited with his friend.  Of course, audience association of this elation with a group of Clydesdales (Budweiser’s most enduring animal mascot) will lead (the ad’s sponsors hope) to the consumption of Budweiser products.

So, what’s not to like?  The first level of cultural semiotic interpretation here reveals that America is a land where it can be assumed that there are enough animal lovers that a sentimental mass market commercial designed for America’s largest mass media audience of the year will be successful.  Heck, (to reverse the old W.C. Fields quip) any country that likes puppies and horses can’t be all bad.

But there is more to it than that.  As I watch this ad I cannot help but associate it with a movie that was made in 2009 called Hachi: A Dog’s Tale.  The movie was directed by an internationally famous director (Lasse Hallstrom) and starred actors of no less stature than Richard Gere and Joan Allen (with a cameo of sorts by Jason Alexander).  And it was never released to U.S. theaters.

Yes, that’s right.  While Hachi: A Dog’s Tale was released internationally, received decent reviews, and even made a respectable amount of money, this Richard Gere movie has only been accessible to American audiences through DVD sales.  With talent like that at the top of the bill, what happened?  Why wasn’t it released to the theaters?

Well, you see, the movie is based on a true story that unfolded in Japan before the Second World War.  It is the story of an Akita whose master died one day while lecturing at the university where he taught, and so never returned to the train station where the Akita had always greeted him upon his return home.  The dog continued to return to the train station most (if not all) evenings for about ten years, sometimes escaping from his new owners in order to do so.  He was finally found dead in the streets.

Hachiko, the original name of the dog, is a culture hero in Japan, and there is a statue of him at the train station where he kept vigil for ten years.  A movie about him was made in Japan in 1987, and while the U.S. version is Americanized, it is pretty faithful to the original story and to the Japanese film.

Which probably explains why it never was released for U.S. theatrical distribution.  I mean, the thing is absolutely heartbreaking.  Have a look at the comments sections following the YouTube clips of the movie, or the Amazon reviews of the DVD; almost everyone says the same thing: how they weep uncontrollably whenever they watch it.  It is significant that the DVD cover for the movie makes it look like a warm and fuzzy “man’s best friend” flick that children and Richard Gere fans alike can love.  Yes, it’s a rather good movie (the music is extraordinary), but warm and fuzzy it ain’t.

And this takes us to the next level of interpretation of “Puppy Love.”  Like Hachi, the puppy in the ad doesn’t want to be separated from someone he loves.  But unlike Hachi, the puppy is happily reunited with his friend in the end.  His tale is a happy one—and an unrealistic one.  It is a wrenching experience for all puppies to be adopted away from their families (which are their initial packs), but they don’t tend to be allowed to go back.  And animals are permanently separated from the people whom they love (and who loved them) all the time, due to various circumstances that can never be explained to them.  This is what makes Hachi: A Dog’s Tale so heartrending: it reveals a reality that is not comfortable to think about.  Evidently this was too much reality for theatrical release.

So “Puppy Love” distracts us from some uncomfortable realities, including the fact that puppies are bred all the time as commodities that will be separated from their original homes (that’s why it is significant that the “Puppy Adoption” sign in the ad seems to indicate a breeding operation) and have their hearts broken.  The ad makes us feel otherwise: that everything is OK.  This is what entertainment does, and that is what is meant by saying that entertainment is “distracting.”  But feeling good about puppy mills isn’t good for puppies.  And feeling good about the many hard realities of life can lessen an audience’s desire to do something about those realities.

And that takes us to a broader cultural-semiotic interpretation: as Max Horkheimer and Theodor Adorno suggested over half a century ago, the American entertainment industry has been working for many years to distract its audience from the unpleasant realities of their lives, thus conditioning them to accept those realities.  Horkheimer and Adorno have gone out of fashion in recent years, but I still think that they have a lot to tell us about just why Americans continue to accept a status quo that is anything but heartwarming.

The Goal of Cultural Semiotics

posted: 1.30.14 by Jack Solomon

As I begin a new semester of teaching popular cultural semiotics, I’d like to succinctly sum up here—both for any of my students who may drop in to read this and, of course, for anyone else who may be interested—what the goal of cultural semiotics is.  The first thing to note is the qualifier “cultural”: that is, while cultural semiotics most certainly includes semiotics as such, there can be a crucial difference between what a cultural-semiotic analysis is looking for and what other sorts of semiotic analyses do.  For example, a semiotic analysis can be entirely formalistic in nature, seeking to decode the particular signs and symbols within one’s subject—as in, for instance, an online interpretation I have seen of Breaking Bad that focuses on the way colors were used in the show to signify character traits.  Such analyses can be quite similar to a New Critical reading of a text, and they are very useful indeed in the performance of a cultural-semiotic analysis; but a cultural-semiotic analysis goes beyond this to cultural signifiers that transcend such formal particulars.

Taking as axiomatic that nothing in our commercialized popular culture would exist if there were not some expectation that it would find a market or audience, cultural semiotics, that is to say, analyzes the consumption of popular culture, and what that may say about its consumers.  Since the consumption of entertainments and many (if not most) consumer goods is voluntary (e.g., no one is forced to watch TV or the movies), we can assume that something in a popular cultural topic is attractive to its consuming audience.  To put this another way, to say that something is “only an entertainment” or is “only fashionable” is to miss the point: cultural semiotics begins with the presumption that the artifacts of popular culture are intended to be entertaining or fashionable, and then asks: what is the significance of the fact that large numbers of people are entertained or attracted by this?

Saying that a movie or TV series, for instance, is entertaining because it is “distracting,” however, isn’t saying enough.  Yes, entertainments are distractions, but audiences are distracted by different kinds of entertainments at different times in history, so a cultural-semiotic analysis situates its topics not only in the present but also in relation to the past to see what differences may distinguish current popular cultural artifacts from past ones, and these differences guide the way to their interpretation.

Thus, unlike a formalistic semiotic analysis, which can focus on a single topic as if it were frozen in time, a cultural-semiotic analysis has to contextualize its topics historically.  Often the same object of analysis means something different at different times, and those differences in meaning reflect differences in cultural consciousness.

It is also important to note that a cultural-semiotic analysis is not an expression of esthetic taste or preference: that is, it is not a “review” or an opinion of whether something is entertaining or not.  At the same time, a cultural-semiotic analysis is not a moral judgment.  One may have moral opinions about the significance of one’s topic, but those opinions are not a part of the analysis, which is concerned with what is, not with what ought to be.  Similarly, while a cultural-semiotic analysis commonly involves politics, its politics is descriptive, not prescriptive.  Of course, the analyst will inevitably have political opinions with respect to the politics of a topic, but the expression of those opinions, while it may form part of the conclusion of an argumentative essay, is not what the analysis seeks, which is, as far as possible, an objective assessment of social meaning.

In this regard, cultural semiotics, while pioneered by Roland Barthes, tries to avoid the kind of political self-privileging that Barthes explicitly claims in his book Mythologies when he identifies “myth” with the bourgeois Right and (rather laboriously) seeks to exempt the Left from “mythic” discourse.  For a cultural-semiotic analysis in the sense I am describing here, mythology—the coded systems of signs within which cultures live and communicate—can be found everywhere, and can be decoded accordingly.

So, whether you are looking at Lady Gaga or Duck Dynasty, the goal is the same: a cultural analysis of what their popularity signifies.  You can be a fan, or not, of their esthetics and/or politics, but your cultural-semiotic analysis isn’t concerned with that.  It is concerned with social significance.

A Digital Dilemma

posted: 12.19.13 by Jack Solomon

While I realize that the problem is not really a brand new one, I have only recently become aware that there is a lot of very good popular cultural analysis available on the Internet in video form.  Well, what’s wrong with that?  After all, the Internet is an absolutely indispensable resource for popular cultural semiotics, a treasure trove of up-to-date primary and secondary source material that I now wonder how I ever did without in my own writing and teaching.  So how could there possibly be a problem here?

The problem, I’m afraid, is that video-format analyses are not detectable by Turnitin or by any other language-based search engine detection method.  For while a video certainly contains plenty of language, its sentences are spoken, not written, and thus cannot be captured by any search method of which I am aware.  My concern here is not that intellectual property may therefore be appropriated without attribution (after all, the producers of such video content generally are open source aficionados with little interest in personal copyright protection) but that student writing may be rewarded for undocumented insights that are not the student’s own.  And this, as I tell all of my students when I explain the rationale for academic strictures with respect to plagiarism, is why I am a rigorous enforcer of such strictures.
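For readers curious about the mechanism, text-matching detection, at its simplest, compares overlapping word sequences in a student paper against an index of written sources.  The little Python sketch below is a deliberately simplified illustration of that idea (Turnitin’s actual algorithm is proprietary, and every name and sentence in the sketch is hypothetical); the point is simply that if a source exists only as spoken video, there is no text in the index, and the comparison never takes place at all.

    from typing import Set, Tuple

    def ngrams(text: str, n: int = 5) -> Set[Tuple[str, ...]]:
        """Break a text into overlapping n-word sequences."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(paper: str, indexed_source: str, n: int = 5) -> float:
        """Fraction of the paper's n-grams that also appear in an indexed written source."""
        paper_grams = ngrams(paper, n)
        if not paper_grams:
            return 0.0
        return len(paper_grams & ngrams(indexed_source, n)) / len(paper_grams)

    # A written source can be indexed and matched; a video analysis never
    # enters the index at all, so even a verbatim transcription goes unseen.
    paper = "the show uses color to signify shifts in character morality"
    written_source = "many critics argue the show uses color to signify shifts in character morality"
    print(overlap(paper, written_source))  # a score above zero: detectable borrowing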

That is, if a student paper is filled with sharp but undocumented analyses that an instructor believes to be the work of the student who wrote the paper, that paper is likely to get a higher grade than a paper that was written honestly, and that is stealing not so much from the true author of the analyses (who, I admit, does not lose anything thereby) but from other students (who may lose a lot because the playing field has been tilted).  It’s the same thing as with the use of performance-enhancing drugs in sports: someone gets an illicit extra edge.  That matters to me, and should matter to our students as well.

I am not going to name any particular video analyses here for various reasons, but I will describe one way that an instructor of popular cultural semiotics can both detect and avoid their successful illicit use—beyond, of course, explaining to students that they must be documented just as any other source must be documented.  This is to construct assignments that require students to set up their own systems of association and difference within which to situate their topics as signs within a semiotic system.  Online video analyses tend to be formalistic in manner, focusing on a film or television program much in the manner of a New Critical reading of a text.  Such analyses can be quite clever and enlightening with respect to the individual popular text, but they don’t accomplish the goal of a popular cultural semiotic analysis, which is to interpret the social significance of the text, and that requires broad contextualization.  To put this another way, the goal of a cultural-semiotic reading of a text is not an intrinsic description of its signs and symbols; it is an extrinsic interpretation of that text with respect to social history.

So, here is something else to keep in mind both when assigning and reading student essays in a class on popular cultural semiotics—and, for that matter, student essays in any class on any subject.  YouTube et al. (and this includes university-based sites, as well, I suppose, as the TED lectures) are filled with useful, but undetectable, material.  I hadn’t thought of it before, but it is worth passing the word along.

Anniversaries

posted: 12.5.13 by Jack Solomon

On December 8 it will be the 33rd anniversary of the death of John Lennon.  In this year of historic anniversaries (the 150th of the Gettysburg Address and of the Battle itself; the 50th of the assassination of President John F. Kennedy), Lennon’s will not loom so large, and that is as it should be.  There are vaster things to think about.

But I wish to ruminate a bit on John Lennon here—not on his music nor even on the man himself, but, rather, on his place in history.  Because while John Lennon never set out to do so and was always more than a bit uncomfortable with the idea, he did change the world.  Or, to be more precise, the world changed, hurling him and his fellow Beatles to the head of a raucous procession that they never intended to lead, but without whom the parade might never have begun.

For what the Beatles did was to begin the process through which our modern entertainment culture has been built.  By “entertainment culture” I mean that state of affairs in which entertainment, once set aside for special holiday and leisure moments (what Henri Lefebvre called “Festival”), has become the dominant feature in our lives.  In an entertainment culture, everything is expected to be entertaining, and entertainers are the focus of everything.  It is in an entertainment culture that most people don’t watch the news, they watch infotainment, and the news strives, in order to survive, to be more entertaining.  Indeed, it is in an entertainment culture that Miley Cyrus is news, and continues to be news.

The Beatles did not invent this.  After all, in the Golden Era of Hollywood, the stars of the silver screen founded the era of the celebrity, and Babe Ruth was as popular as any current sports hero.  Music has known its bobby-soxers, and Elvis Presley is still the King.  But the worldwide hysteria that greeted the Beatles in 1964—the Beatlemania that is the standard against which all popular cultural phenomena ever since have been measured—still marked a quantitative change.  Whatever one thinks of the Beatles’ music (personally, I still find it magnificent, but that isn’t the point), its impact can hardly be overstated.

That impact has been two-fold.  First, it demonstrated that the youth market (and the potential for that market) was greater than had ever before been appreciated.  Elvis was one thing, and so was Sinatra, but their effect was nothing like this.  And second, the realization of the potential of the youth market (especially in America) gave the young a power that they had never had before.  You could say that the Beatles were in the right place at the right time: just at the point when the largest generation of children in American history was beginning to grow up, coddled and restless and groping for its own place in the world.  The Beatles, who only wanted to sell records, opened up a way.

One might say that the Beatles were the match that lit up the Baby Boom generation and launched America into the full tide of what is still a youth culture.  But because the spark lay in entertainment, as opposed to other forces that have moved masses of people in years past, it could easily be commodified, and thus coopted.  The “revolution” that came so readily to the lips of Baby Boomers in the 1960s could never become serious when it came wrapped in pleasure and was little more than a pose to sell records (the Rolling Stones were never really street fighting men).

And so, the paradoxical legacy of those Liverpudlian moptops who seemed to challenge the Establishment back in ’64 has been mostly a huge boon to capitalism, helping to launch a hedonistic consumer society grounded in entertainment.  It all probably would have happened eventually without the Beatles, but that doesn’t change the fact that they were the ones at the center of it all, Pied Pipers to a future that is now.

Thor

posted: 11.21.13 by Jack Solomon

Well, the dude with the big hammer just pulled off the biggest box office debut in quite some time, and such a commercial success calls for some semiotic attention.

There is an obvious system within which to situate Thor: The Dark World and thus begin our analysis. This, of course, is the realm of the cinematic superhero, a genre that has absolutely dominated Hollywood film making for quite some time now. Whether featuring such traditional superheroes as Batman, Spider-Man, and Superman, or such emergent heavies as Iron Man and even (gulp!) Kick-Ass, the superhero movie is a widely recognized signifier of Hollywood’s timid focus on tried-and-true formulae that offer a high probability of box office success due to their pre-existing audiences of avid adolescent males. Add to this the increasingly observed cultural phenomenon that adulthood is the new childhood (or thirty is the new fourteen), and you have a pretty clear notion of at least a prominent part of the cultural significance of Thor’s recent coup.

But I want to look at a somewhat different angle on this particular superhero’s current dominance that I haven’t seen explored elsewhere. This is the fact that, unlike all other superheroes, Thor comes from an actual religion (I recognize that this bothered Captain America’s Christian sensibilities in The Avengers, but a god is a god). And while the exploitation of their ancestors’ pagan beliefs is hardly likely to disturb any modern Scandinavians, this cartoonish revision of an extinct cultural mythology is still just a little peculiar. I mean, why Thor and not, say, Apollo, or even Dionysus?

I think the explanation is two-fold here, and culturally significant in both parts. The first is that the Nordic gods were, after all, part of a pantheon of warriors, complete with a kind of locker/war room (Valhalla) and a persistent enemy (the Jotuns, et al.) whose goal was indeed to destroy the world. (That the enemies of the Nordic gods were destined to win a climactic battle over Thor and company in the Ragnarok, or Wagnerian Götterdämmerung, is an interesting feature of the mythology that may or may not figure in a future installment of the movie franchise.) But the point is that Norse mythology offers a ready-made superhero saga to a market hungering for clear-cut conflicts between absolute bad guys whose goal is to destroy the world and well-muscled good guys who oppose them: a simple heroes vs. villains tale.

You don’t find this in Greek mythology, which is always quite complicated and rather more profound in its probing of the complexities and contradictions of human life and character.

But I suspect that there is something more at work here. I mean, Wagner, the Third Reich’s signature composer, didn’t choose Norse mythology as the framework for his most famous opera cycle by accident. And the fact is that you just don’t get any more Aryan than blonde Thor (isn’t it interesting that the troublesome Loki, though part of the Norse pantheon too, somehow doesn’t have blonde hair? Note in this regard how the evil Wormtongue in Jackson’s The Lord of the Rings also seems to be the only non-blonde among the blonde Rohirrim). The Greeks, for their part, weren’t blondes. So is the current popularity of this particular Norse god a reflection of a coded nostalgia for a whiter world? In this era of increasing racial insecurity as America’s demographic identity shifts, I can’t help but think so.

Halloween

posted: 11.8.13 by Jack Solomon

With October 31st being the submission deadline for this, my 78th Bits blog, I thought I’d turn to a question a student of mine asked about the significance of the sorts of costumes being marketed to women these days for Halloween wear.  Well, that one’s pretty easy: in a cultural system that includes such phenomena as a young Miley Cyrus seeking to shake off her Hannah Montana image by (to put this as politely as possible) making an erotic spectacle of herself in order to succeed as a grown-up singer, the immodest (let’s stay polite) wear marketed to women at Halloween is just another signifier of what Ariel Levy has quite usefully labeled “raunch culture.”  Whether such explicit displays (and expectations thereof) of female sexuality constitute a setback for women’s progress (which would be a Second-wave feminist assessment of the matter) or an advance (which might be a Third-wave interpretation) is not something I want to get into here.  It’s Halloween as a cultural sign that I’m interested in now.

To see the significance of the contemporary Halloween, we need (as is always the case with a semiotic analysis) to situate it within a system of signs.  We can begin here with the history of Halloween.  Now, whether Halloween is a Christianized version of an ancient pagan harvest festival or, as All Hallows’ Eve, is simply the liturgical celebration of the saintly and martyred dead that it claims to be at face value, is not something we need be concerned with.  More significant is that neither of these meanings has been operative in modern times, when Halloween became a children’s holiday: a night (with no religious significance whatsoever) to dress up in costume and go trick-or-treating for free candy.

But in these days of an ever more restricted children’s Halloween, with parental escorts or carefully monitored parties taking the place of the free-range trick-or-treating of a generation and more ago, along with an ever expanding adult celebration of Halloween, we can note a critical difference, which, as is usually the case in a semiotic analysis, points to a meaning—actually, several meanings.

The first is all too painfully clear: whether or not we actually live in more dangerous times (which is a question that has to be left to criminologists), we certainly feel that America has become a place where it is not safe to let children roam about on their own at night.  The trust that Americans once had for each other has certainly evaporated, and the modern Halloween is a signifier of that.  (One might note in this regard the ever more gruesome displays that people are putting up in their front yards: yes, Halloween began as a celebration of the dead, but this obsession with graphic and violent death hints at an insensitivity to real-life suffering that does not do much to restore that old trust.)

But as Halloween has shrunk in one direction, it has exploded in another, becoming one of the premier party nights of the year for young adults.  Joining such other originally liturgical holidays as Mardi Gras, today’s Halloween is essentially a carnival—an event that has traditionally featured an overturning of all conventional rules and hierarchies: a grand letting off of steam (sexual and otherwise) before returning to the usual restrictions on the day after.  Dressing in costume (whether along more traditional lines as some sort of ghoul, or as some other, more contemporary persona) enables a freedom—a licentiousness even—that ordinary life denies.  At a time when, in reality, the walls are closing in for a lot of Americans, carnivalesque holidays like Halloween are, quite understandably, growing in popularity.

There is more to it than that, of course. A further significance is the way that Halloween, like so many other American holidays (both religious and secular), has become a reason to buy stuff—not only costumes and food and candy, but also decorations, Christmas style, that start going up a month or more before the holiday arrives.  Like Valentine’s Day, and Mother’s Day, and Father’s Day, and, of course, Christmas, Halloween is now part of a different sort of liturgical calendar: a signifier of the religion of consumption.

And no, I don’t celebrate Halloween.  October 31st has a very private significance for me: on that day in 1980 all of my Harvard dissertation directors signed off on my PhD thesis.  I think of it as the Halloween thesis.  I suppose that my doctoral gown is a costume of sorts, but I haven’t worn it in years.
