Blog Post 7: chapters 4, 7, 8

As I sat down to begin composing this final blog post, I was so confused by Chapter 4 that I decided to revisit the requirements for the blog; I wanted to be certain that what I want to say fulfills them. Therefore, this post will be a bit different from my previous missives. Hopefully, it will offer beneficial and constructive information.

I have found this text to be simultaneously too broad and vague and yet too specific. What do I mean by this? Chapter 4 serves as a perfect illustration of my point.

At first glance, I thought the chapter would be useful reading that would teach us how to format the metadata for the artifacts of our archive. I thought I would be learning what information is generally included in descriptions of an item. Broadly speaking, this was true; the chapter did provide suggestions, such as file format, type of file/artifact, size, color, date, author, etc. But the chapter did not define basic terminology, such as “leader area” on page 91. Nor did it define “record,” a term whose meaning may seem obvious but which has a specific definition in archiving and information management. That definition is to be found in the discussion of relational databases in Chapter 1, which was not assigned reading. The only reason I know what the term “record” implies is prior knowledge, which brings me to this passage from page 90:

“We are working from the assumption that you have a basic understanding of descriptive practices in libraries or archives and so will be building on that beginning level of knowledge to learn how to use or adapt existing element sets and descriptive standards to describe born-digital resources.”

Okay, that seems reasonable on the face of it: anyone who has used the library’s website should have some idea of what “element sets” and “descriptive standards” are suggestive of, if not exactly what they are, and the book does define these two terms. However, in doing so, it also uses the terms “fields” and “values,” which are words with highly specific meanings when discussing relational databases. So again, we have holes in our understanding that can only be filled by looking elsewhere.

I couldn’t find “field” or “value” defined in the glossary, listed in the index, or explained in Chapter 1. Here again, prior knowledge assisted me. A relational database is made up of records. Records contain fields, which should be the same in each record. When the database is searched, one can locate items in different ways. Locating items by color, for example, is possible because each item in the database has a record that contains the same fields. A value is whatever piece of information is entered into a field. For example, if the database were for an archive of socks, the record for a pair of men’s brown crew socks might have fields for color, style, and size. The value entered in the color field would be brown; the value for the style field would be crew.
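To make the sock example concrete, here is a minimal sketch of such a database using Python’s built-in sqlite3 module. The table, field names, and sample records are my own invention for illustration; they are not from the book:

```python
import sqlite3

# An in-memory database for a hypothetical sock archive.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Each row in this table is a "record"; each column is a "field".
cur.execute("""
    CREATE TABLE socks (
        id    INTEGER PRIMARY KEY,
        color TEXT,   -- a value entered here might be 'brown'
        style TEXT,   -- ...or 'crew' here
        size  TEXT
    )
""")

# The "values" are the pieces of information entered into each field.
cur.execute("INSERT INTO socks (color, style, size) VALUES (?, ?, ?)",
            ("brown", "crew", "men's 10-13"))
cur.execute("INSERT INTO socks (color, style, size) VALUES (?, ?, ?)",
            ("black", "ankle", "men's 10-13"))
con.commit()

# Because every record has the same fields, we can search by any of them.
cur.execute("SELECT style FROM socks WHERE color = 'brown'")
print(cur.fetchall())  # [('crew',)]
```

Because the fields are uniform across records, the same `WHERE color = …` query works no matter how many socks the archive eventually holds.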

But the values that will be entered depend on the rules. The rules tell the person entering the data into the database what value to input for each field in a record. And this brings me back to my point: the book introduces us to MARC and BIBFRAME and the codes that are used in these “descriptive standards” (90-94). But the descriptions are either so broad (“BIBFRAME is a descriptive model for bibliographic resources finalised in 2013”) or so specific that they are confusing (94). For example, I did not find it helpful to know what the “options” for “the 007 field” are. Without an understanding of what a field is, I’m not sure my classmates would either.
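As a hypothetical illustration of how rules constrain values, one could imagine data-entry rules encoded as a small validation function. The controlled vocabulary and field names here are invented for the sock example, not drawn from MARC or BIBFRAME:

```python
# A made-up "rule": the color field must come from a controlled vocabulary,
# and the style field must not be left empty.
ALLOWED_COLORS = {"brown", "black", "white", "grey"}

def validate_record(record: dict) -> list[str]:
    """Return a list of rule violations for one record (empty if valid)."""
    errors = []
    if record.get("color") not in ALLOWED_COLORS:
        errors.append(f"color {record.get('color')!r} not in controlled vocabulary")
    if not record.get("style"):
        errors.append("style field must not be empty")
    return errors

print(validate_record({"color": "brown", "style": "crew"}))  # []
print(validate_record({"color": "taupe", "style": ""}))      # two violations
```

Descriptive standards like MARC are, in effect, vastly more elaborate versions of this: shared rulebooks that tell every cataloger what value belongs in every field.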

I have created a very simple relational database for an Information Sciences class. However, after reading Chapter 4, I am glad that I do not have to make one for this class. I am in favor of a cross-disciplinary approach to subject matter (my B.A. is in Interdisciplinary Studies). But if Chapter 4 was meant as an introduction to relational databases, then perhaps the chapter title should not have been “Description.” This also backs up my complaint that the text is simultaneously too broad and vague and, somehow, too specific. If the goal is to “guide,” then perhaps it would have been helpful to know that “BIBFRAME (‘Bibliographic Framework’) is a project by the Library of Congress that will implement a linked data model in creating library metadata. It is part of the effort to find a replacement for MARC” (Hirsh, Information Services Today, 152).

The possibilities that digital archiving offers for cross-referencing information and streamlining tasks are truly staggering. Chapter 7 discusses “workflows,” and the authors maintain that “perhaps the best benefit of workflows is the adjustment in analytical and strategic thinking” (153). This seems similar to saying that the best digital tools may be limited by the capabilities of their users. Time spent thinking about how to efficiently perform tasks is probably time well spent, especially when the tasks are being completed in a digital space.

Digital task management through workflow is certainly valuable. Furthermore, it supports “flexibility,” not just in thinking about how tasks should be undertaken but in how they will be accomplished, particularly when adjusting to “diminished resources” (158). The efficiency suggested by the “‘more product, less process’” approach seems applicable in a variety of scenarios outside the realm of archiving, whether digital or traditional (159). I imagine that businesses from fast food to clothing retailers probably employ workflows.

The authors also note that “workflows naturally lead you to consider the principles guiding the actions described in the workflow” (159-160). Thus, it seems highly likely that a workflow could also serve as a way to become more responsive to a community of users, or a method for increasing inclusivity in an archive. With a streamlined process to follow, the biases of the archivist would be somewhat removed from the archiving process, and workflows could even serve “as excellent communicative documents for outside parties” to illustrate ways that an archive is moving towards inclusiveness (162).

The final chapter, Chapter 8, deals with the future of archiving and born-digital media. One thing that I find fascinating is non-fungible tokens (NFTs). How archives will deal with this increasingly valued digital artifact is going to be extremely interesting. In a time when a digital artifact can be endlessly reproduced and distributed the world over in the blink of an eye, ownership of the “original” seems almost oxymoronic, or at least moot. Although the authors do not address NFTs specifically, they do reference the growing use of “cloud storage technologies” (168). Posing it as a problem of “forensics,” the authors note the importance of being able to trace a digital artifact to its source of origin and the difficulties of doing so (168). Thus, the value of ownership remains pertinent within the digital realm, whether it’s mainly for “bragging rights” or for identifying digital evidence used in a criminal investigation.

In conclusion, I have found this text useful but sometimes confusing. Although I enjoyed being introduced to familiar concepts in new ways, I think that for the true beginning archivist, the book may offer too much information in some ways while not supplying enough background to enable comprehension of the topics being discussed. While I enjoy the process of discovery, constant googling of unfamiliar terminology can be more of a hindrance to efficiency than I like. More “product” (in the form of glossary terms or a deeper index) would, in this instance, create a more efficient learning “process.”

Blog Post 6: The No-Nonsense Guide to Born-Digital Content, chapters 2 and 3

One thing that I noticed immediately from chapter 3 of The No-Nonsense Guide to Born-Digital Content was how the goals of archiving are heavily focused on preservation. As our group may choose an Indigenous culture to explore as a possible source of artifacts for our archive, it is worth noting that acquisition of digital materials seeks to preserve “the material’s original order as it came to the library or archives, along with traces of the user’s activity, potential remnants of past data, and system files and features not revealed to the regular user” (Ryan and Sampson 54). The goals of those who archive digital materials and those who restructure existing archives to make them more inclusive and respectful of Indigenous cultural artifacts are strikingly similar.

Sandra Hirsh, editor of Information Services Today, remarked that any “information organization” should seek “to adopt a passion and commitment to equity of access … [and] inclusion” (4). Thus, it is interesting to note that archiving born-digital artifacts presents some of the same challenges that archiving traditional knowledge (TK) sources and artifacts does. As Ryan and Sampson warn, “simple copying of data from these devices is equivalent to disregarding the contextual information” (55). Context is of enormous significance to TK sources as well. For instance, knowing the occasion and purpose for wearing particular articles of clothing helps interested parties obtain a more complete picture of the garment and the person who may have worn it, thereby conferring a measure of respect on the archived material while also being inclusive in ways that may have been missing from archives of the past.

(Photo source linked here)

One way that digital media protects itself is known as “write blocking,” which prevents changes to a file stored on media (Ryan and Sampson 56). The purpose behind a write block is to keep the file—and all of the information that is attached to the file—intact (59). Acting as a sort of “forensic bridge,” a write blocker protects the disk data from alteration (57). While this was not possible for Indigenous materials, it is constructive to draw an analogy between dealing with digital artifacts and TK artifacts. If only all traditional knowledge sources had been write-protected, today’s culturally aware archivist would be in a better position to present TK in a respectful and inclusive manner. As Indigenous artifacts were separated from their contexts, meaning and significance were often altered or destroyed. It seems as if the terminology and the goals of digital and TK archiving are the same, but the tools each uses may be different.
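Real write blockers are hardware or forensic-software tools, but the basic idea can be sketched in a few lines of Python: open the artifact read-only, so the data can be examined but never altered through that handle. This is only a loose analogy of my own, not how forensic write blockers are actually implemented:

```python
import io
import tempfile

# Create a stand-in "artifact" file on disk.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("original archival content")
    path = f.name

# A crude software analogue of a write blocker: open the file read-only.
blocked = open(path, "r")
print(blocked.read())  # the original content is intact and readable

try:
    blocked.write("tampered!")  # any attempt to change the file...
except io.UnsupportedOperation:
    print("write blocked")      # ...is refused
blocked.close()
```

The point of the analogy is that the refusal is built into the access path itself: the person examining the artifact cannot alter it even by accident.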

But technology is helping to restore contextual information to Indigenous archives. The similarities between archiving digital materials and a collection of TK artifacts are also found in the ease of identifying connections between digital items. For example, creating links in an archive between a Nobel-winning scientist’s profile, the emails they sent, or their Spotify playlist can give interested parties a fuller picture of the scientist’s mindset and of the environment in which their work was being done (Ryan and Sampson 35). Such connections enrich a researcher’s understanding, and the same is true for Indigenous artifacts as well. Contextual information for, say, a Native American dress could help information seekers identify the occasion the garment was created for. But the value of other pertinent information would be diminished if no connections were made between such information and the original garment. Therefore, if the context is separated from the artifact, the value of the information as a whole is diminished; in other words, the whole is greater than the sum of its parts. This is true for digital and traditional Indigenous knowledge alike: “Without the metadata that describes them, much of the value of our collections is diminished or even completely destroyed” (34).
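As a rough sketch of how such links might work in a relational database, the following joins a hypothetical scientist’s record to the artifacts connected to them. All names and tables are invented for illustration:

```python
import sqlite3

# Linking related items so context travels with the artifact.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE artifacts (
        id INTEGER PRIMARY KEY,
        person_id INTEGER REFERENCES people(id),  -- the link back to a person
        kind TEXT,
        title TEXT
    );
    INSERT INTO people VALUES (1, 'A. Scientist');
    INSERT INTO artifacts VALUES (1, 1, 'email', 'Re: lab results');
    INSERT INTO artifacts VALUES (2, 1, 'playlist', 'Writing music');
""")

# One query gathers every linked item, giving a fuller picture of the person.
cur.execute("""
    SELECT a.kind, a.title
    FROM artifacts a JOIN people p ON a.person_id = p.id
    WHERE p.name = 'A. Scientist'
    ORDER BY a.id
""")
print(cur.fetchall())  # [('email', 'Re: lab results'), ('playlist', 'Writing music')]
```

Severing the `person_id` link here is the digital equivalent of separating a garment from its context: each item still exists, but the whole is no longer recoverable.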

I was surprised to learn that born-digital content requires “a greater commitment of time and resources to preserve and provide access to over time” (31). I believe that the key phrase in this quote is “over time.” This makes sense when one considers that most books, if kept clean, dry, and in good repair, can last decades or even centuries. This is obviously less true of digital artifacts. No matter how reliable a storage device is, it is still a complicated piece of machinery subject to degradation over time or the simple flukes and fluctuations of the power grid such a device is connected to. Maintaining such devices requires specialized skills that may lie outside the average archivist’s sphere of professional knowledge.

Perhaps archives will someday resemble the library in Star Wars: Attack of the Clones, but I hope that no future librarian would conclude that if an object is not to be found within the archives, then the object does not exist. Although archives that can display artifacts holographically may be some time away, 3D artifacts are currently being produced. Thus, there is a growing need for competent archiving of such born-digital information (32).

(Photo source link)

As our group undertakes the task of creating an archive, we must keep in mind that our own biases will creep into our collections. Even the decision on what type of items we will be archiving is a biased one: we will choose what we are interested in. If the archive were to be created for others to use, however, we would need to consider the needs of all stakeholders. In any case, we should seek to make our archive culturally responsive and respectful, as well as inclusive.

Blog Post 5: Producing the Magazine

Chapters 3, 7, 11, 13 of The Magazine from Cover to Cover

Since our group’s chosen magazine topic is education, chapter 13 was particularly interesting in its discussion of ethics, a topic highly relevant to the education field. In California, K-12 educators are required to act as mandatory reporters. Therefore, the questions posed on page 342 of The Magazine from Cover to Cover are important to keep in mind when creating a magazine geared toward educators: “What duties do I have, and to whom do I owe them? What values are reflected by the duties I’ve assumed?” (Johnson and Prijatel). These are questions that we should be asking ourselves as grad students as well. Plagiarism and misrepresentation of source materials are huge issues in academia. Referring to the above questions will help us make certain that our magazine adheres to the highest ethical standards.

Magazines must also answer the question “[w]hat serves the reader best?” (Johnson and Prijatel 343). So when it comes to advertising, we want to make certain that the products or services we include in each issue are not just “targeted” towards educators but are things that our readers may benefit from (48). Learning that readers are more likely to purchase products that are advertised in magazines can help us choose to work with advertisers whose products will be beneficial to educators as educators (49). Running ads for overpriced cars or other luxury items may alienate our readers and signal that we view them as little more than consumers to be exploited. But exposing our readers to products that they may find beneficial to forwarding their students’ education experience can work to introduce them to products that they may not have known about. In this respect, advertisements can be beneficial to both the consumer and the producer and help us keep our magazine’s mission statement alive.

Making our magazine consistent from issue to issue is going to be something that I feel will be appealing to our readers. Knowing where to locate a particular column that they enjoy, or where the op-ed pieces are usually located, can build reader loyalty. Therefore, as a small publication in an already crowded field, it may be in our readers’ best interests to keep “the placement of elements from one issue to another” as consistent as possible (Johnson and Prijatel 289-90). This can also cut down on production costs, as our small staff will have to spend less time on arranging a new “break-of-the-book” from issue to issue (288-90). Part of the experience and attraction of a magazine is the consistency in look and content from issue to issue. I know what to expect when I pick up a copy of Vogue, for instance. Giving our readers that same satisfying feeling will help build a loyal readership.

Finding readers is of course a challenge that any magazine must face. As we decide whether our magazine will be purely digital or produce print copies as well, we will struggle to attract eyes to our publication. As academics, and K-12 teachers in particular, have historically been underpaid, we will want to provide as much value for our readers as we can. Since we will definitely be publishing digitally, we should be aware that “search engine optimization (SEO) can bring readers to the brand through carefully developed keywords” (Johnson and Prijatel 176). Conducting informal searches for education-oriented Twitter accounts and searching blog aggregators like Feedly can help us identify useful keywords to drive our brand and better align our content with potential readers’ interests.

Creating a magazine from scratch is a daunting task. With so many options to consider, I’m glad that I don’t have to worry about marketing our brand for real. It’s going to be enough work to get one issue up and running without having to start all over again once that first issue is completed. It’s fun to read and think about, but for right now, the idea of “one and done” is quite appealing. Like everything I’m exposed to in grad school, I am left feeling like the more I learn about a topic, the more I realize that I know very little about it. It’s nice to scratch the surface of magazine production, but I’m glad I’m not doing it for a living.

Blog Post 4

Chapters 6, 9, 10: The Magazine From Cover to Cover, Sammye Johnson and Patricia Prijatel

On page 233, the authors note that many magazines today resemble each other. Although some longstanding titles retain a distinct individuality, others are homogeneous look-alikes. This is particularly noticeable in the frequent “lifestyle” special issues (or SIPs) that are near-ubiquitous and practically indistinguishable from each other in both design and content (242).

When I first began reading Rolling Stone, its design was decidedly different from other magazines at the time. It was larger, thicker, and printed on crummy newsprint-like paper. Now it resembles Time magazine minus the distinctive red border. Perhaps Rolling Stone’s “redesign” was a way of “[b]oosting readership within a certain demographic group or expanding a current base of readers” (282-283). Or perhaps it was the result of “focusing on the lowest common denominator” (245). In any case, Rolling Stone’s glossy, cramped pages no longer shout “counterculture.”

On page 256, it was interesting to note that the design of a magazine being read via an app differs significantly from a print edition and presents unique challenges for designers. This makes sense since a much smaller area is being viewed, and therefore eye movement may be necessarily different from the standard z-shaped path the reader tracks on the printed page or the computer screen (256). Including more interactive text and other clickable links makes sense too, a way to put more content within immediate reach in a small space.

Service as an article “type” had never occurred to me (222). Learning all of the different names for the types of magazine articles made me realize that I had never given much thought to categorizing them. In the book, upwards of 8 pages have a running head of “Article Types,” which run the gamut from expert advice (223) to investigative reporting (232) to the good old essay (234). I especially found the box on page 256 interesting, as it compares the terminology differences between magazines and newspapers.

Chapter 9 discusses the links between magazines and blogs but left out one blog that I read occasionally: HuffPost. Updates are done frequently, sometimes hourly. No account is needed to view content, and the “About” page states that their mission is “to report with empathy and to put people at the heart of every story.” Thus, articles often have a human-interest slant when discussing political topics or government actions. A born-digital publication, it nevertheless resembles a magazine with its distinctive, consistent look, mission statement, and service-driven approach (146, 148-49).

One magazine discussed in some depth is Flair. Devoting over three pages to it, the authors use it as a case study of the clash between editorial wants and reader interests, summing it up thus: “Flair is one of the most popular magazine failures in American history and provides an excellent case history of the importance of mission, formula, and audience” (157).


The video above gives the viewer a taste of Flair’s extraordinary style; it’s a preview for the book The Best of Flair.

On page 160, the authors indicate that the contemporary version of Flair is Flaunt. As I read the box about Flaunt on page 160, I immediately thought of Interview magazine. A quick search led me to the Flaunt website, and another let me do a side-by-side comparison of the two sites: the similarities were telling. Both feature the magazine’s “logo” at the top center, surrounded by a linear menu of links (256). Interview uses a traditional serif font and organized, clickable images with “cutlines” on the home page that are reminiscent of a well-designed Padlet (256). Flaunt’s site presents a slightly less traditional approach than Interview’s: its Padlet-like home page arrangement is edgier and more disorganized. The navigation bars on both sites have links for art, fashion, and music. But Flaunt’s has links for “People” and “Parties” and eschews Interview’s “Film” link in favor of “Video.” “Detox” is important enough to Flaunt readers to warrant inclusion, and both sites let readers access a selection of items that are for sale. Ultimately, it’s up to the viewers/readers to decide which site is cool and which one is trying too hard. Regardless of the final judgment, the similarities are too strong to overlook.

Gaining a solid foundation in the lingo of the magazine will be useful for the upcoming assignment. Instead of fumbling around with words like “thingy” or “stuff,” I can use logo, mission statement, pull-quote and cutline without worrying that classmates might take my use of the word “dingbat” personally (256).

Blog Post 3

The Power of Participatory Video

Alison Cardinal: “Participatory Video: An Apparatus for Ethically Researching Literacy, Power and Embodiment.”

Reading this article, I found myself wishing that the author had defined the term “embodiment.” While I know what the term means, knowing in what sense the author was using it would have been helpful from the start. Since the meaning of a word is frequently context-dependent, this lends visual media a strength that static texts lack. In video, context is visible and aural; the speaker’s pronunciation—intonation, cadence, and stress—conveys meaning. Although context can be supplied with text, it may be difficult to ascertain precise connotation if the reader is not supplied with a definition; we cannot hear inflection in written text. In other words, I think video can sometimes convey implied meaning more efficiently and thoroughly than the written word. The most convincing argumentative support is often personal experience, which is why eyewitnesses are so compelling to jurors. But participatory video is more than a tool to convey ideas or create knowledge. It can help reset the balance of power between the producer and the consumer.

Another way in which video addresses the balance of power is as a means to record survey responses, an excellent way to mitigate the “observer expectancy” influence. It may also enable a researcher to capture more information about a respondent: as I mentioned above, we communicate through inflection and body language as well. This aligns with what the author calls “participatory design,” which involves the user of the apparatus/space/resource in the design of the apparatus/space/resource (Cardinal 36). For example, creating spaces that are usable and as free from gender bias and other forms of discrimination as possible requires considering the needs of a wide array of users. This is similar to UX, or user experience, which includes the user’s needs in the design process to best accommodate a wide variety of individuals: “[u]ser experience work is commonly project based, even when ongoing, and frequently involves multiple stakeholders from within and outside the organization” (Sandra Hirsh, Information Services Today: An Introduction, pp. 171, 179). Thus, both processes (participatory design and UX) are indicative of researcher/designer respect for the subject. Furthermore, in participatory design, the apparatus used to capture the information aids in conveying equitable status to the user as co-creator “by being inclusive of embodied knowledge-construction” (Cardinal 36).

Participatory video (PV) is another way that “intersectional feminist researchers account for the positionality of the researcher and suggest that the collaborative construction of knowledge with participant/collaborators is a more ethical approach” (35). As Cardinal explains it, “my research suggests video does not just capture a reality but constructs one, and the researcher needs to actively design a method of data collection that works against Western ways of knowledge-production” (35). In this YouTube video, titled “Participatory Video is a Revolutionary Tool,” Samwel Nangiri of Tanzania reveals ways that PV aids activism:

(InsightShare, 15 Nov. 2019)

PV is increasingly utilized by marginalized people the world over to heighten the impact of their message, to increase communication, and to spread the word. PV “democratizes the production of stories and products by including more opportunities for marginalized people to participate” (Cardinal 36). Not only can this lend power to the disenfranchised, it increases literacy—both digital and traditional.

Whether communicating digitally or traditionally, one thing creators/composers must keep in mind is their audience, whether “imagined or real” (Cardinal 43). Referencing Sara Kindon, Cardinal argues that “The boundaries between researcher/researched, audience, and apparatus are slippery, and they change based on emerging assemblages” (43). This is a part of the “democratizing” abilities of PV: to reset the power balance in ways that written communication is not capable of doing.

Many of us may feel like an “outsider” when entering the academic world. How much more alienating it must be for a person who has been marginalized by skin color or ability. PV, participatory design, and UX all seek to include the marginalized, and PV gives creators greater power to influence their audience as they decide what audiences hear and see. A deeper connection may be established by utilizing viewers’ eyes and ears. A multitude of choices, some involving creators’ bodies, influence and shape both the message and audience perception, while helping to engage viewers’ hearts and minds in ways that static text cannot.



Blog Post 2

Beyond Writing: Voice and Rhetoric in the Digital Age

I have devalued orality, placing it beneath writing, whether as a natural tendency due to my lack of developed verbal skills or as a product of a system of education that valued writing over aural expression. Cynthia L. Selfe references Jacqueline Jones Royster’s “When the First Voice You Hear Is Not Your Own.” My deep respect for Royster’s work is based not just on the power of her message, but on the strength of her well-formulated argument, couched in the language of academia. For Royster, “voice” is multi-faceted, more than a definition of “authority” (31). While I wholeheartedly agreed with Royster’s message, I also admired the structure and scholarly tone of her essay and found myself thinking it would be an excellent blueprint to follow for creating strong arguments of my own.

My admiration for the structural integrity of Royster’s article is indicative of the notion that composition is solely a written activity, which conflicts with Selfe’s major premise: that “multiple modalities of expression” have value (8). Selfe argues that we “need to pay attention to both writing and aurality” (8). If we “ignore the history of rhetoric and its intellectual inheritance, … we also limit, unnecessarily, our scholarly understanding of semiotic systems” (618). Ignoring the value of all forms of communication does a disservice to the emergent possibilities that technology holds for learning. As social justice issues (hopefully) become increasingly central to class discussions across all academic disciplines and a vital aspect of curriculum development, integrating other forms of communication into the classroom serves to increase equity, participation, and respect for cultural diversity.

Selfe states that “voice” became increasingly linked to writing, and that “speaking and listening in composition classrooms was identified as improved writing” (630-31; 634). Written communication skills became increasingly vital to economic success as the “growing middle class,” scientific progress, and the growth of manufacturing in the mid-1850s drove shifts in education and society (Selfe 620). Noting the dichotomy that existed between teaching methods (oral lectures) and student assignments (written essays), Selfe observes that “[b]y the end of the nineteenth century … English studies faculty still lectured and students still engaged in some oral activities,” but students were expected to respond in writing to material presented orally (626-627).

Selfe argues “that every teacher and student understands [that] power and aurality are closely linked” (634). Minority cultures used aurality as a means to preserve history and for storytelling (624).  Aurality “persisted in black communities in verbal games, music, [and] vocal performance” (624). Call-and-response singing, “scat” singing in jazz, and Rap are examples of the verbal dexterity and creative resilience of African American communities subjugated by slavery and oppressed by racism. Literacy became a tool used by the dominant white society to keep persons of color powerless (624).

Societal shifts in the last twenty years have contributed to the rise of different modes of communication. With the increasing accessibility and ease of use of technology, “technology scholars” have encouraged “multimodal composing” (638). Software such as Audacity, “low-cost and portable technologies of digital audio recording,” and easy-to-use digital video recording and editing software all have helped the resurgence of communication through other means beyond the written word (638).

In “Composing for Sound: Sonic Rhetoric as Resonance,” authors Mary E. Hocks and Michelle Comstock maintain that “in 2006 we defined sonic literacy as ‘the ability to identify, define, situate, construct, manipulate, and communicate our personal and cultural soundscapes’” (136). In other words, sonic literacy demands that we attune to aurality in explicit and exacting ways. The authors also argue that “sonic rhetoric can be characterized as embodied and dynamic rhetorical engagements with sound” (136). Thus, both sonic literacy and sonic rhetoric are more than passive listening; sound can be a means to an end, a part of the rhetorical toolbox that can be used to create, support, and incorporate persuasive arguments. This necessitates “particular [different] ways of listening”; it demands a “listener-centric approach” to “sonic rhetorical engagement” (136). Composition of such works, therefore, requires thinking of audience—and setting—in different ways.

Different approaches to listening were identified by Pierre Schaeffer as a way for listeners to strip away unnecessary distractions in order “to hear in a new way” (139). Building on Schaeffer’s work, his student Michel Chion defined three “modes of listening”: “causal, semantic, and reduced” (139). Useful as aids to critical listening, these modes function in complementary yet distinct ways. Causal listening acknowledges that “sound is supplementary,” as the listener attends to “the source,” seeking “information” (139). Semantic listening “interprets for code or meaning” (139). “‘Reduced listening,’ … is bracketing the first two modes and making the sound itself the object” (139-140). The three modes differ in the ways that the listener attends to and ultimately interprets aural perceptions. Just as written text can be interpreted through a multitude of lenses, these modes offer listeners interpretive frameworks for the aural experience, providing a “vocabulary” with which to express and interpret listening in ways that include (but go beyond) writing about aural work.

New possibilities for scholarly expression are fueling imaginative approaches to assignments. Tech offers ways to better include marginalized populations. For example, having students create an argument that must be communicated visually may give non-deaf individuals some small insight into what it is like to convey meaning without the use of sound. Or having students produce work that is solely aural could provide a deeper meaning to the term “voice.” Learning how the privileging of written communication has served to exclude populations of color from society, or learning how technology is aiding to remove cultural bias from library classifications, can give students opportunities to increase tech proficiency while addressing issues such as racism, oppression, and exclusion.

Blog Post 1

When I think of digital publishing, I generally think of websites, or electronic books and magazines. For me, oddly enough, the considerations of scholarly publishing were secondary. I say oddly because I greatly appreciate—and benefit from—online library database access. Although I realized the many upsides offered by digital access to scholarly material, I gave little thought to any of the downsides. Drawbacks of digital content include maintenance costs, copyright protections that may limit availability or access, and a lack of context that may diminish the depth of discussion around a particular topic; as Baxter notes, something is lost along with “the value of the issue as an intellectual form” (“English: The Future of Publishing” 87). Baxter’s point is similar to one that I find missing with digital delivery of music: listening to random songs lessens the impact of the artist’s vision for an album as an integral whole.

As Linda Bree points out, rumors of the “‘death of the monograph’” have been greatly exaggerated (“English” 96). Although it seems that academia is shrinking in some ways, and there are barriers (or resistance) to the digitization of scholarly materials, open access and digital publishing are not going away, and the internet opens up research possibilities in new and productive ways. With white papers, working papers, and conference papers now widely accessible, the “opportunities for experimentation and invention are likely, in fact, to increase” (Baxter 89).

Experimenting and inventing seems to be built into the objectives of the PWW initiative. In the article “Publishing Without Walls: Building a Collaboration to Support Digital Publishing at the University of Illinois,” Green highlights Kim Gallon’s viewpoint: Gallon points out the possibilities for “recovery” that digital access offers and the ability of digital humanities “to restore the humanity of black people lost and stolen through systemic global racialization” (“Publishing Without Walls” 24). Green maintains that it is time to “[c]reate a new model for the conceptual development of scholarly communication” (25).

Often at the forefront of the development of innovative approaches to digital content, libraries are critically aware of the need for new ways to present and organize digital artifacts. Librarians, or more accurately information professionals, are working to remove the implicit bias, inequality, and marginalization of minority populations inherent in the information management systems of the past. The Scholarly Commons, “an interdisciplinary digital scholarship center in the University of Illinois Library,” is one project that is working to overcome the limitations of mainstream cataloging techniques and create opportunities for collaboration built on respect for traditional sources of knowledge, sources that may not fit into categories created by the dominant culture (Green 27). The goal is to support work that is “dynamic, interactive, and always in conversation with the world around us” (33). Hooray for libraries!

Libraries immediately understood the impact that technology would have on the dissemination and creation of information, and the possibilities for access to knowledge that digital content offers. Thus, libraries are continuing to change the information landscape through increasing involvement in digital publication (Melton 95-96). In the book chapter titled “The Center That Holds: Developing Digital Publishing Initiatives at the Emory Center for Digital Scholarship,” Melton states that “library publishing is becoming increasingly common in academic libraries” (96). Melton focuses primarily on the activities of the Emory Center for Digital Scholarship (ECDS) (97). Using platforms such as Drupal and WordPress, the ECDS is helping broaden the boundaries of digital scholarship.

One of the most interesting developments to fully utilize the possibilities of technology, the interactive “mapping initiative” called ATLMaps “offer[s] a framework that incorporates storytelling reliant on geospatial data” (Melton 100). Offering faceted viewpoints and presenting different historical perspectives of locations and the information attached to them, ATLMaps gives new meaning to “place,” linking digital space to real-world locations. With publicly available source code, anyone can contribute: ATLMaps is “a project that invites crowdsourced contributions” (100). To me, it sounds like an empowering opportunity to give marginalized voices a way to be heard that goes beyond “ownership” of property. Zotero, GitHub, Readux, and other tools offer almost endless possibilities to expand the horizons of digital scholarship.

Making certain that the rigorous standards demanded by academic scholarship are met, libraries are working as intermediaries between the worlds of academia and the general public to disseminate information in new and exciting ways.