Ilana Gershon asked nine anthropologists for their approaches to the many daunting tasks of publishing an article in a journal, based on questions generated by Sandhya Narayanan. This installment explores the following question:

What are the spoken and unspoken metrics of publishing in your experience? Do some types of publications or venues count more than others?

Deborah Gewertz: The only publications that matter are peer-reviewed publications, at least at my (pretty elite) liberal arts college. When I’ve been on the “tenure and promotion committee,” administrators and colleagues do speak about the ranking of presses and journals, although they may not know these for sure, as they vary from field to field. Thus, when up for tenure or promotion, the letter a candidate writes to such a committee might include this information when describing their publishing accomplishments.

Jason Jackson: As an author and as a colleague charged with assessing the work of other colleagues, I try to consider and combine as many metrics (qualitative and quantitative) as I can and to make the unspoken ones more legible. As an author, I would like to know (or at least have a sense of) a journal’s acceptance rate and its average time-to-publication. I might wish to submit a particular work to a journal with a very low acceptance rate, or I might have good reason to do the opposite based on my goals for the work and the circumstances in my career at the time. I might be indifferent to a journal having a backlog in its “pipeline” or be very anxious about sooner-is-better. As an author and as someone assessing colleagues (in tenure reviews, for instance), I am interested in how widely available and how discoverable works published by a particular journal are. This takes in questions of paywalls, digital platforms, and varieties of open access, but also includes things like discoverability, readership metrics, various journal-level citation-based impact factors, and various kinds of article-level metrics. I do not think that any metric or group of metrics should be used categorically. One colleague may be establishing a leadership role within specific disciplinary or sub-disciplinary conversations. Another may be working toward breathtaking knowledge of a particular time and place. A third may be trying to influence public policy, while a fourth aims to bring their field to the task of reshaping a neighboring discipline. Each of these colleagues would develop a very different publishing program.

Daniel Monterescu: The social and institutional logic of the field is quite clear to me, and it’s quite depressing. First is the bias against certain languages and book chapters, which are often not acknowledged by certain academic institutions for promotion and employment purposes. Second is the clear hierarchy between journals. I have published articles I really liked and believed were strong in specialized or second-tier journals and regretted seeing that they reached a significantly lower number of readers. Unfortunately, I see a clear correspondence between the ranking of the journal and its readership impact. While this is true for the English-language (mainly American) publishing field, I find it less consequential in other languages that are less stratified.

Carolyn Rouse: The H-Index is taken seriously by people in the sciences and quantitative social sciences. But people who know better ignore it. Some of the worst articles and theories (e.g., “Tragedy of the Commons”) are among the most cited. Most tenure committees still take seriously the journals with the highest standards and lowest acceptance rates. And many editors and tenure committees take a journal’s impact factor seriously. Again, this is not a perfect measure, since we often assign or download articles that we think students will enjoy and get something out of, but not necessarily the articles we think are the most theoretically and ethnographically critical to the discipline. As anthropologists, we know that publication metrics are problematic.

Janelle Taylor: This is something that really varies by field and subfield. In general, there’s a certain prestige attached to flagship journals of professional associations, and aiming to publish in those is good, especially earlier on as you are working to establish yourself. These days there are so many more varieties of journals, and the lines between blogs and journals can get a bit blurry; one important dividing line to look for is whether a venue is blind peer-reviewed or not. If you’re trying to apply for jobs or postdocs or want to come up for promotion, then you might want to think twice about publishing in a venue that isn’t peer-reviewed.

Matt Tomlinson: About ten years ago, the Australian Research Council ranked thousands of journals. Ones ranked “A*” (like an A plus) were considered the best. An “A” journal was excellent; a “B” was good; a “C” was adequate. If a journal didn’t make the list, it was considered unimportant. As you can imagine, there was a fierce backlash to this exercise, for good reasons. Some of the rankings seemed to be inflated or deflated by motivated editors. The rankings threatened to create an unproductive winners-and-losers system in which top-ranked journals got swamped by submissions and perfectly good journals went begging. And sometimes a lower-ranked or unranked journal is the best place to publish a particular article, and you should not be penalized for choosing it. Yet—I probably shouldn’t admit this, but I will—especially when reading interdisciplinary work, I have snuck a glance at the ratings now and then to get a rough sense of whether a journal is held in high regard or not. Because, let’s face it, we do this kind of evaluation ourselves all the time, if informally.

Claire Wendland: One of my professional feet is in the world of anthropology, the other in the world of medicine. The unspoken metrics are entirely different in these two, and success in one looks like failure in the other! The professional importance of books in academic anthropology is hard to overstate. Next up, solo-authored articles in four-fields journals (even though we are in theory a collaborative discipline), or solo-authored book chapters in edited collections with a prestigious university press. In cultural anthropology, a CV with a hundred many-authored publications in a host of different specialized journals and no books looks pretty sketchy. In medicine, on the other hand, that looks like success. A book or a book chapter seems to be taken as an irrelevant or possibly even embarrassing vanity project! The sheer number of publications, in journals with impact factors that anthropologists can only dream of, matters way more—even if each article has thirty authors or is built around the Least Publishable Unit of any given study.

Jessica Winegar: In anthropology, highly ranked anthropology journals (Cultural Anthropology, American Ethnologist, American Anthropologist) and R1 (often private) university presses count more than other venues. We need more of an honest conversation about what is gained and lost in this system of prestige.

Matthew Wolf-Meyer: This has changed a lot thanks to the internet and databases like AnthroSource. It seems to me that the esteem that people have for the flagship journals is really an artifact of the bygone era of people receiving paper copies of journals in the mail and the limited real estate in those journals—so getting something into one of those journals really meant getting it in front of most anthropologists in the days when every AAA member received a copy of American Anthropologist. But now, every Wiley journal (and more) is available through a quick search on AnthroSource, and that really seems to have leveled the field in many respects.

The outcome is that it’s less important where one publishes and more important that what one publishes is accessible to people in the field. I’m sure that some tenure, promotion, and hiring committees still value publications in some journals over others, but in terms of impact it seems increasingly less important that one places a piece in a particular journal and more important that one places it in a journal where it will be read by the right people.

Photograph of two pencils. Image credit: Joanna Kosinska.

Deborah Gewertz is the G. Henry Whitcomb Professor at Amherst College and has been an associate editor of American Ethnologist, Ethnos, and the Journal of the Royal Anthropological Institute.

Jason Jackson is the Ruth N. Halls Professor of Anthropology and Folklore at Indiana University, and the editor of Museum Anthropology Review.

Daniel Monterescu is associate professor of urban anthropology at Central European University. 

Carolyn Rouse is chair of the anthropology department at Princeton University. 

Janelle Taylor is a professor at University of Toronto. 

Matt Tomlinson is an associate professor at Australian National University. 

Claire Wendland is a professor at University of Wisconsin, Madison.  

Jessica Winegar is a professor at Northwestern and editor of PoLAR: Political and Legal Anthropology Review. 

Matthew Wolf-Meyer is an associate professor at SUNY-Binghamton.

Authors

Ilana Gershon

Ilana Gershon is the Ruth N. Halls Professor of Anthropology at Indiana University. Her most recent monograph is on corporate hiring in the United States—Down and Out in the New Economy: How People Find (or Don’t Find) Work Today (University of Chicago Press, 2017).

Cite as

Gershon, Ilana. 2021. “Metrics and Publishing an Article.” Anthropology News website, September 17, 2021.
