Interview with Evan Hughes, author of Literary Brooklyn

Literary Brooklyn
Evan Hughes; Holt Paperbacks

Do you think writers of the caliber of Whitman, Baldwin, Marianne Moore, etc. are writing in Brooklyn today?

That’s very difficult to say. I think, by and large, the passage of time does a better job than critics or journalists (or, say, me) of sorting out who the lasting and important voices really are. It might be particularly hard in the case of current Brooklyn writers in that a lot of the most well-known are quite young: Lahiri, Egan, Whitehead, Foer, Englander, Arthur Phillips, etc. All under 50 I think, sometimes well under. So important work lies ahead. But also there are always interesting writers out there who we don’t know about yet. Maybe they are immigrants or working-class men and women living in Brighton Beach or Coney Island and they haven’t managed to catch the notice of major publishers yet. Or maybe they don’t even know they’re important writers yet. Some of these people will emerge, just like, say, Kazin and Fuchs and Malamud did, coming from very poor environments.

Were there writers you thought of including? I wondered about lesser writers like Erich Segal, author of Love Story, and Chaim Potok, author of My Name is Asher Lev, for example.

There are a lot of people I considered but didn’t include, or mentioned only briefly. It was painful! You might also have mentioned Lovecraft, Gilbert Sorrentino (in there, but briefly), Asimov, George Oppen. But we wanted to keep the book to a certain length and keep it moving along. It’s not meant to be just a completist survey of Brooklyn lit. It’s meant to tell the story of BK through literature. So if one writer covered a certain theme or era well, that sometimes meant others in the same vein were underplayed.

Have you read MFA vs NYC (excerpted here)? Any thoughts about this quote?

And the NYC writer, because she lives in New York, has constant opportunity to intuit and internalize the demands of her industry. It could be objected that just because the NYC writer’s editor, publisher, agent, and publicist all live in New York, that doesn’t mean that she does, too. After all, it would be cheaper and calmer to live most anywhere else. This objection is sound in theory; in practice, it is false. NYC novelists live in New York—specifically, they live in a small area of west-central Brooklyn bounded by DUMBO and Prospect Heights. They partake of a social world defined by the selection (by agents), evaluation (by editors), purchase (by publishers), production, publication, publicization, and second evaluation (by reviewers) and purchase (by readers) of NYC novels. The NYC novelist gathers her news not from Poets & Writers but from the Observer and Gawker; not from the academic grapevine but from publishing parties, where she drinks with agents and editors and publicists. She writes reviews for Bookforum and the Sunday Times. She also tends to set her work in the city where she and her imagined reader reside: as in the most recent novels of Shteyngart, Ferris, Galchen, and Foer, to name just four prominent members of The New Yorker’s 20-under-40 list.

I have read it. I think the passage you quote is not strictly accurate, but I think Harbach was not trying to be strictly accurate–intentionally he’s creating an exaggerated binary between the MFA writer and the NYC writer. And you know, I think he’s actually on to something. (Maybe I should mention that I know and am friendly with Harbach.) It does seem just true, doesn’t it, that novelists can live anywhere but a huge number of them live in Brooklyn to be a part of the literary scene that exists here. I think in some ways that’s a natural response to the loneliness of being a writer–you want a tribe. I don’t think that’s something to be lamented, though it’s not something that everyone celebrates either–it can lead to an exclusionary feeling if someone feels her work is not being seen just because she doesn’t go to the Brooklyn parties.

Any response to the article in The American Scholar?

I didn’t think that was a very good or persuasive piece. Maybe I’m biased. But it did seem suspect that he had to drag in many people who do not live in Brooklyn at all (Michael Chabon, Dave Eggers, Benjamin Kunkel) to make his strangely vindictive case. I also think he’s just factually wrong that tons of contemporary books are set in Brooklyn. It’s actually something of a disappointment to me that many interesting Brooklyn authors have not set books here.

Seems like the book would be a great companion to a walking tour and/or pictorial rendition.

I have given a couple of walking tours as events for the book and it was a lot of fun. Some day I’ll do another I’m sure. I plan to incorporate some pictures for talks I’m doing at the Brooklyn Public Library (Dec 14 at 7 pm) and the Mid-Manhattan Library (Jan 5, 6:30 pm).

As a librarian, am curious to know: What was your best research aid? Did the Brooklyn Historical Society and the Brooklyn Collection of the Brooklyn Public Library have the majority of what you needed?

The Brooklyn Collection was very helpful. They have some open shelves that can help you discover books you might not come across just by googling or using Amazon. And so was the Brooklyn Historical Society, which also has a nice syllabus of Brooklyn reading that they’ll give you. I should also point out that the book is not a scholarly work in the sense that I did not dig into archives of primary sources (except rarely) in order to discover facts that no, say, Whitman scholar had ever found. It was much more about synthesizing the work of scholars and uniquely focusing on the relationship between Brooklyn and the writers’ work and lives.

Did you have any models in mind for the history of city life? Thought this was a unique and clever way of writing literary history.

You know, actually I don’t know of anything very similar. I have heard there is some crossover with a recent book on Paris by Graham Robb [ed. note Parisians]. But in terms of using literature and its themes to tell history, one great model is a book I wrote about, On Native Grounds, by Alfred Kazin.

Personal Digital Archiving

I presented on “Constructing a Digital Identity Compatible with Institutional Archives” at this excellent conference at the Internet Archive.

Personal Archiving Systems and Interfaces for Institutions. What are the experiences and design decisions of institutions that have built systems for personal digital archives?

SPARC Innovation Fair

At the podium

From a brief talk given 8 November at the SPARC 2010 Digital Repositories Forum:
Hello, I’m Jay Datema, associate director at the Bern Dibner Library, Polytechnic Institute of NYU. I’m honored to be included in this year’s Innovation Fair at the SPARC conference. I have two minutes, so I’ll keep it short.

My poster is entitled “Full Circle Research: Occam’s Razor for Collection.” As many of you know, Occam’s Razor is a principle taken from the philosopher William of Ockham, who posited that “when several theories model the available facts adequately, the simplest theory is to be preferred.” This principle dates back to the 1300s, so it’s had some time to prove itself. Institutional repositories, on the other hand, are just a decade old.

Simply stated, my poster shows that research is a process that starts with an analysis of publications, which of course will then produce more publications. As Samuel Johnson said, “The greatest part of a writer’s time is spent in reading, in order to write; a man will turn over half a library to make one book.” What is the online equivalent? I suppose it would have to be endless surfing of bibliographies, databases, and PDFs. Research only ends when your attention span falters or a deadline awaits.

Evolution not Revolution

Swimming in salt water is wonderful; drinking it is not. Four hundred years ago, the first American settlers in Jamestown, Virginia, ran into trouble during their first five years because the fresh water they depended upon for drinking turned brackish in the summer. Suddenly, besides the plagues, angry Indians, and crop difficulties, they had to find new sources of fresh water inland. Libraries and publishers face a similar challenge as the hybrid world of print and online publications has changed the economic certainties that kept both healthy.

The past five years in the information world have been full of revolutionary promise, but the new reality has not yet matched the promise of a universal library. Google Scholar promised universal access to scholarly information, yet since its dynamic start in 2004 it has seen few evolutionary changes. In fact, the addition of Library Links using OpenURL support is the last major feature Scholar has gained. That NISO standard, which enables seamless full-text access, has shown its value.
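OpenURL (NISO Z39.88-2004) is what lets a Library Links entry resolve to an institution’s own copy of the full text. As a rough illustration, here is a minimal Python sketch of an OpenURL query string in key/encoded-value form; the resolver base URL and the citation values are placeholders, not a real institution or article:

```python
from urllib.parse import urlencode

# Hypothetical link-resolver base URL; a real institution would substitute its own.
RESOLVER = "https://resolver.example.edu/openurl"

def openurl_for_article(atitle, jtitle, issn, volume, spage, date, aulast):
    """Build an OpenURL 1.0 (NISO Z39.88-2004) query in KEV form for a journal article."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.atitle": atitle,
        "rft.jtitle": jtitle,
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,
        "rft.date": date,
        "rft.aulast": aulast,
    }
    return f"{RESOLVER}?{urlencode(params)}"

# Placeholder citation values, purely for illustration.
print(openurl_for_article("An Example Article", "Journal of Examples",
                          "0000-0000", "12", "34", "2009", "Author"))
```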

For years, it’s been predicted that the Google Books project would revolutionize scholarship, and in some respects it has done so. But in seeking a balance between cornering Amazon’s market for searching inside books, respecting authors’ rights, finding the rights holders of so-called orphan works, and solving metadata and scanning quality issues, its early promise is not yet fulfilled.

The Information Bomb and Activity Streams

In 1993, Yale computer science professor David Gelernter opened a package he thought was a dissertation in progress. Instead, it was a bomb from the Unabomber, who would later write in his manifesto that “Technological society is incompatible with individual freedom and must therefore be destroyed and replaced by primitive society so that people will be free again.” Though Kaczynski’s point was lost when attached to violence, it’s ironic that his target was a computer science professor who professed not to like computers, the tool of a technological society.

In one of the dissertations Gelernter supervised, Eric Thomas Freeman proposed a new direction for information management. Freeman argued that “In an attempt to do better we have reduced information management to a few simple and unifying concepts and created Lifestreams. Lifestreams is a software architecture based on a simple data structure, a time-ordered stream of documents, that can be manipulated with a small number of powerful operators to locate, organize, summarize, and monitor information.” Thus, the stream was born of a desire to answer information overload.
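As a rough sketch of the idea, here is a minimal lifestream in Python: a single time-ordered list of documents plus a couple of simple operators. The class and method names are my own illustrations of Freeman’s description, not his actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Document:
    created: datetime
    title: str
    text: str

@dataclass
class Lifestream:
    # The core data structure: one time-ordered stream of documents.
    docs: List[Document] = field(default_factory=list)

    def add(self, doc: Document) -> None:
        self.docs.append(doc)
        self.docs.sort(key=lambda d: d.created)

    def find(self, term: str) -> "Lifestream":
        # A 'find' operator returns a substream, itself a time-ordered stream.
        return Lifestream([d for d in self.docs if term in d.title or term in d.text])

    def latest(self, n: int = 10) -> List[Document]:
        # The default view is simply the most recent end of the stream.
        return self.docs[-n:]
```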

While Freeman anticipated freedom from common desktop computing metaphors, the Web had not yet reached ubiquity when he wrote, 12 years ago. His lifestreams principles live on in the software interfaces of twitter, delicious, Facebook, and FriendFeed. But have you tried to find a tweet from three months ago? How about something you wrote on Facebook last year? And FriendFeed discussions have no obvious URL, so there’s no easy way to return to the past. The obsolescence is by design, and the stream comes and goes like an information bomb.

In The Anxiety of Obsolescence, Pomona College English professor Kathleen Fitzpatrick says that “The Internet is merely the latest of the competitors that print culture has been pitted against since the late nineteenth century. Threats to the book’s presumed dominance over the hearts and minds of Americans have arisen at every technological turn; or so the rampant public discourse of print’s obsolescence would lead one to believe.” Fitzpatrick goes on to say that her work is dedicated to demonstrating the “peaceable coexistence of literature and television, despite all the loud claims to the contrary.” This objective is a useful response to the usual kvetching about the utter uselessness of the activity stream of the day.

A Standard for Sharing

Now popularized as activity streams, the flow of information has gained appeal because it gives users a way to curate their own information. Yet there is no standard way for this information to be recast by the user or data providers in a way that preserves privacy or archival access.

Chris Messina has advocated for social network interoperability, and suggests that “with a little effort on the publishing side, activity streams could become much more valuable by being easier for web services to consume, interpret and to provide better filtering and weighting of shared activities to make it easier for people to get access to relevant information from people that they care about, as it happens.” Messina points out that activity streams “provide what all good news stories provide: the who, what, when, where and sometimes, how.”

In the digital age, activity streams could be used as a way to record interactions with scholarly materials. Just as COUNTER and Metrics from Scholarly Usage of Resources (MESUR) record statistics about how journal articles are viewed, an activity stream standard could be used to provide context around browsing.
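As an illustration of what such a record might contain, here is a sketch of a single activity entry for one scholarly interaction, loosely following the who/what/when/where that Messina describes. The field names and URLs are assumptions for illustration, not a published schema:

```python
from datetime import datetime, timezone
import json

# Hypothetical activity entry recording one interaction with a scholarly object.
# Field names and identifiers are illustrative, not drawn from a standard.
activity = {
    "actor": "https://example.edu/people/researcher-42",   # who
    "verb": "annotated",                                    # what was done
    "object": "https://doi.org/10.1000/example",            # what was acted upon
    "published": datetime.now(timezone.utc).isoformat(),    # when
    "context": "https://example.edu/courses/eng-101",       # where / in what setting
}

print(json.dumps(activity, indent=2))
```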

For example, Swarthmore has a fascinating collection of W.H. Auden materials. You can see which books he checked out, the books he placed on reserve for his students, and even his unauthorized annotations, including his exasperated response to his own work: “Oh God, what rubbish.” What seemed ephemeral becomes a revealing exercise in tracing the thought of a poet in America at a crucial period in his development. If we had captured what Auden was listening to, reading, and attending at the same time, what a treasure trove it would be for biographers and scholars.

The Appeal of Activity Streams

In 2007, Dan Chudnov wrote in “Social Software: You Are an Access Point”: “There’s a downside to all of this talk of things ‘social.’ As soon as you become an access point, you also become a data point. Make no mistake – Facebook and MySpace wouldn’t still be around if they couldn’t make a lot of money off of each of us, so remember that while your use of these services makes it all seem better for everybody else, the sites’ owners are skimming profit right off the top of that network effect.” How then can the user access and understand their own streams and data points?

Maciej Ceglowski, former Mellon Foundation grant officer and Yahoo engineer, has founded an antisocial bookmarking service called Pinboard which safeguards user privacy over monetization and sharing features. One of its appealing features is placing the user at the center of what they choose to share, without presuming that the record is open by default. In fact, bookmarks can be made private with ease.

In The Information Bomb, Paul Virilio wrote that “Digital messages and images matter less than their instantaneous delivery: the shock effect always wins out over the consideration of the informational content. Hence the indistinguishable and unpredictable character of the offensive act and the technical breakdown.” Users can manage or drown in the stream. To safeguard this information, users should push for their own data to be made available so that they can make educated choices.

With the well-founded Department of Justice inquiry into the Google Books settlement over monopoly pricing and privacy, libraries can now ask for book usage information. Just as position information enables HathiTrust to provide full-text searchability, usage information would give libraries a way to better serve patrons and give special collections a treasure trove of information.

Annotating Video

It seems that everything’s available online, except the ability to search for particular video scenes. Recently, I was searching for an actress I’d last seen in a film 15 years ago and imdb.com was no help. I eventually found Lena Olin by watching the credits, but the experience made me wonder if video standards could aid the discovery process.

In a conversation last year, Kristen Fisher Ratan of HighWire Press wondered if there was a standards-based way to jump to a particular place in a video, which YouTube currently offers through URL parameters. This is an obvious first step for citation, much as the page number is the lingua franca of academic citations and footnotes. And once a naming convention is established, the ability to retrieve and search particular passages becomes a basic requirement for all video applications.
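YouTube’s time offset in the URL is the kind of convention a citation could lean on today. A small sketch of building such a deep link; the video identifier is a placeholder:

```python
def youtube_moment(video_id: str, seconds: int) -> str:
    """Deep-link to a specific moment in a YouTube video via the 't' URL parameter."""
    return f"https://www.youtube.com/watch?v={video_id}&t={seconds}s"

# A citation could then point at, say, 2 minutes 30 seconds into a lecture.
print(youtube_moment("VIDEO_ID", 150))
```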

Josh Bernoff, a Forrester researcher, is quite skeptical about video standards, saying, “Don’t expect universal metadata standards. Standards will develop around discrete applications, driven primarily by distributors like cable and satellite operators.” While this is likely true at present, use of established frameworks like RDF with relevant subsets of Dublin Core extensions could enable convergence. As John Toebes, Cisco chief architect, wrote for the W3C Video on the Web workshop, “Industry support for standards alignment, adoption, and extension would positively impact the overall health of the content management and digital distribution industry.”

Existing Models
It’s useful to examine the standards that have formed around still images, since there is a mature digital heritage for comparison. NISO’s Data Dictionary – Technical Metadata for Digital Still Images (Z39.87), expressed in XML as the MIX schema, is a comprehensive guide to the fields in use for managing images.

IPTC and EXIF standards for images have the secondary benefit of embedding metadata, so that information is added at the point of capture in a machine-readable format. However, many images, particularly historical ones, need metadata to be added after the fact. Browsing Flickr images gives an idea of the model: camera information comes from the EXIF metadata, and IPTC can be used to capture rights information. However, tags and georeferencing are typically added after the image has been taken, which requires a different standard.
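As a sketch of the first half of that model, the camera metadata embedded at capture time can be read straight from the file, for instance with the Pillow library in Python. The file name is a placeholder, and the snippet assumes a JPEG that actually carries EXIF data:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def embedded_camera_metadata(path: str) -> dict:
    """Read EXIF metadata embedded at the point of capture."""
    with Image.open(path) as img:
        exif = img.getexif()
        # Map numeric EXIF tag ids to human-readable names.
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Tags, captions, and georeferences added later live outside this embedded block,
# which is why a separate annotation standard is needed.
print(embedded_camera_metadata("photo.jpg"))
```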

Fotonotes is one of the best annotation technologies going, and has been extended by Flickr and others to give users and developers the ability to add notes to particular sections of an image. The annotations are saved in an XML file, and are easily readable, if not exactly portable.

The problem
For precise retrieval, video requires either a text transcript or complete metadata. Ten years ago, Jane Hunter and Renato Iannella proposed an excellent model system for news video indexing using RDF and Dublin Core extensions. There has been some standardization around the use of Flash and MPEG standards for web display of video, which narrows the questions, just as PDF adoption standardized journal article display.

With renewed interest in Semantic Web technologies from the Library of Congress and venture capital investors, the combination of Dublin Core extensions for video and the implementation of SMIL (Synchronized Multimedia Integration Language, pronounced “smile”) may be prime territory for mapping to an archival standard for video.

Support is being built into Firefox and Safari, but the exciting part of SMIL is that it can reference metadata from markup. So, if you have a video file, metadata about the object, a transcript, and various representations (archival, web, and mobile encodings of the file), SMIL can contain the markup for all of these things. Simply stated, SMIL is a text file that describes a set of media files and how they should be presented.
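A minimal, illustrative example of what such a SMIL file might contain, and how its media references can be read back out, is sketched below in Python. The file names and title are placeholders, and the document is deliberately stripped down to one parallel group of a video and its transcript:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative SMIL document: one presentation that points at a web
# encoding of a video and its transcript. File names are placeholders.
SMIL_DOC = """\
<smil xmlns="http://www.w3.org/ns/SMIL">
  <head>
    <meta name="title" content="Example lecture"/>
  </head>
  <body>
    <par>
      <video src="lecture-web.mp4"/>
      <textstream src="lecture-transcript.txt"/>
    </par>
  </body>
</smil>"""

root = ET.fromstring(SMIL_DOC)
# List every media reference the SMIL file describes.
for el in root.iter():
    if "src" in el.attrib:
        print(el.tag.split("}")[-1], "->", el.attrib["src"])
```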

Prototypes on the horizon
Another way of obtaining metadata is through interested parties or scholars collaborating to create a shared pool of information to reference. The Open Annotation Collaboration, just now seeking grant funding, with Herbert van de Sompel and Jane Hunter among its investigators, aims to establish a mechanism for client-side integration of video snippets and text, as well as machine-to-machine interaction for deeper analysis and collection.

And close by is a new Firefox add-on, first described in D-Lib as NeoNote, which promises a similar option for articles and videos. One attraction it offers is the ability for scholars to capture their annotations, share them selectively, and use a WebDAV server for storage. This assumes a certain level of technical proficiency, but the distributed approach to storage has been a proven winner in libraries for many years now.

The vision
Just as the DOI revolutionized journal article URL permanence, I hope for a future where a video URL can be passed to an application and all related annotations can be retrieved, searched, and saved for further use. Then, my casual search for the actress in The Reader and The Unbearable Lightness of Being will be a starting point for retrieval instead of a journey down the rabbit hole.
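In code, that vision might look something like the sketch below: hand a video URL to a shared annotation store and get back everything attached to it. The endpoint and parameters are entirely hypothetical; no such shared service exists yet:

```python
import requests

# Hypothetical shared annotation endpoint; purely illustrative.
ANNOTATION_API = "https://annotations.example.org/search"

def annotations_for(video_url: str, query: str = "") -> list:
    """Ask a (hypothetical) annotation store for notes attached to a video URL."""
    resp = requests.get(ANNOTATION_API, params={"target": video_url, "q": query})
    resp.raise_for_status()
    return resp.json()

# e.g. annotations_for("https://example.org/films/the-reader", query="Lena Olin")
```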

Are You Paying Attention?

Not for the first time, the glut of incoming information threatens to reduce useful knowledge to a mere cloud of data. And there’s no doubt that activity streams and linked data are two of the more interesting tools to aid research in this onrushing surge of information. In this screen-mediated age, the advantages of deep focus and hyper attention are mixed up like never before, since the advantage accrues to the company that can collect the most data, aggregate it, and repurpose it for willing marketers.

N. Katherine Hayles does an excellent job of distinguishing between the uses of hyper and deep attention without privileging either. Her point is simple: Deep attention is superb for solving complex problems represented in a single medium, but it comes at the price of environmental alertness and flexibility of response. Hyper attention excels at negotiating rapidly changing environments in which multiple foci compete for attention; its disadvantage is impatience with focusing for long periods on a noninteractive object such as a Victorian novel or complicated math problem.

Does data matter?

The MESUR project is one of the more interesting research projects going, now living on as a product from Ex Libris called bX. Under the hood, MESUR looks at the research patterns of searches, not simply the number of hits, and stores the information as triples, or subject-predicate-object statements in RDF, the Resource Description Framework. RDF triple stores can put the best of us to sleep, so one way of thinking about them is as smart filters. Having semantic information available allows computers to distinguish between Apple the fruit and Apple the computer.
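A toy example of triples as smart filters: with a few subject-predicate-object statements, a machine can keep the two Apples apart. The URIs and predicates are illustrative placeholders, not terms from a real vocabulary:

```python
# Triples as (subject, predicate, object); the type statements are what let a
# machine tell the two "Apple"s apart. Identifiers are illustrative placeholders.
triples = [
    ("ex:Apple_Inc",   "rdf:type",    "ex:Company"),
    ("ex:Apple_fruit", "rdf:type",    "ex:Fruit"),
    ("ex:article_17",  "ex:mentions", "ex:Apple_Inc"),
]

def subjects_of_type(store, rdf_type):
    """A 'smart filter': return only subjects whose declared type matches."""
    return [s for (s, p, o) in store if p == "rdf:type" and o == rdf_type]

print(subjects_of_type(triples, "ex:Company"))   # ['ex:Apple_Inc']
```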

In use, semantic differentiation gives striking information gains. I picked up the novel Desperate Characters, by Paula Fox. While reading it, I remembered that I first heard it mentioned in an essay by Jonathan Franzen, who wrote the foreword to the edition I purchased. This essay was published in Harper’s, and the RDF framework in use on harpers.org gave me a way to see articles both by Franzen and articles about him. This semantic disambiguation is the obverse of the firehose of information that is monetized through advertisements.

Since MESUR pulls information from Caltech’s and Los Alamos National Laboratory’s SFX link resolver service logs, there’s an immediate relevance filter applied, given the scientists doing research at those institutions. Using the information contained in the logs, it’s possible to see whether a given IP address (belonging to a faculty member or department) goes through an involved research process or a short one. The researcher’s clickstream is captured and fed back for better analysis. Any subsequent researcher who clicks on a similar SFX link has a recommender system seeded with ten billion clickstreams. This promises researchers a smarter Works Cited, so that they can see what’s relevant in their field prior to publication. Competition just got smarter.
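The mechanics can be sketched with a toy co-occurrence recommender: sessions reconstructed from resolver logs seed a “researchers who followed this link also followed” list. The identifiers are made up, and this is a deliberately naive stand-in for MESUR’s far richer model:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Each session is the list of articles one researcher clicked through,
# as might be reconstructed from SFX link-resolver logs (identifiers invented).
sessions = [
    ["doi:A", "doi:B", "doi:C"],
    ["doi:A", "doi:C"],
    ["doi:B", "doi:D"],
]

# Count how often two articles appear in the same session.
co_clicks = defaultdict(Counter)
for session in sessions:
    for a, b in combinations(set(session), 2):
        co_clicks[a][b] += 1
        co_clicks[b][a] += 1

def recommend(article: str, n: int = 3) -> list:
    """Articles most often clicked in the same session as `article`."""
    return [other for other, _ in co_clicks[article].most_common(n)]

print(recommend("doi:A"))
```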

A standards-based way to describe attention

Attention.xml, first proposed in 2004 as an open standard by Technorati technologist Tantek Celik and journalist Steve Gillmor, promised to give priority to items that users want to see. The problem, articulated five years ago, was that feed overload is real, and the need to see new items and what friends are also reading requires a standard that allows for collaborative reading and organizing.

The standard seems to have been absorbed into Technorati, but the concept lives on in the latest beta of Apple’s browser Safari, which lists Top Sites by usage and recent history, as does Firefox’s Speed Dial. And of course, Google Reader has Top Recommendations, which tries to leverage the enormous corpus of data it collects into useful information.

Richard Powers’ novel Galatea 2.2 describes an attempt to train a neural network to recognize the Great Books, and its narrator finds socializing online to be a failing project: “The web was a neighborhood more efficiently lonely than the one it replaced. Its solitude was bigger and faster. When relentless intelligence finally completed its program, when the terminal drop box brought the last barefoot, abused child on line and everyone could at last say anything to everyone else in existence, it seemed to me we’d still have nothing to say to each other and many more ways not to say it.” Machine learning has its limits, including whether the human chooses to pay attention to the machine in a hyper or deep way.

Hunch, a web application designed by Caterina Fake, known as co-founder of Flickr, is a new example of machine learning. The site offers to “help you make decisions and gets smarter the more you use it.” After signing up, you’re given a list of preferences to answer. Some are standard marketing questions, like how many people live in your household, but others are clever or winsome. The answers are used to construct a probability model, which is used when you answer “Today, I’m making a decision about…” As the application is a work in progress, it’s not yet a replacement for a clever reference librarian, even if its model is quite similar to the classic reference interview. It turns out that machines are best at giving advice about other machines, and if the list of results incorporates something larger than the open Web, then the technology could represent a leap forward. Already, it does a brilliant job at leveraging deep attention to the hypersprawling web of information.

How to Achieve True Greatness

Privacy has long returned to norms first seen in small-town America before World War II, and our sense of self is next up on the block.  This is as old as the Renaissance described in Baldesar Castiglione’s The Book of the Courtier and as new as twitter, the new party line, which gives ambient awareness of people and events.

In this age of information overload, it seems like a non sequitur that technology could solve what it created. And yet, since the business model of the 21st century is based on data and widgets made of code, not things, there is plenty of incentive to fix the problem of attention. Remember, Google started as a way to assign importance based on who was linking to whom.

This balance is probably best handled by libraries, with their obsessive attention to user privacy and reader needs, and librarians are the frontier between the machine and the person. The open question is, will the need to curate attention be overwhelming to those doing the filtering?

Galatea 2.2
Richard Powers; Farrar, Straus and Giroux 1995

Lock-in leads to lockdown

What goes up must come down. This simple law of gravity can be seen in baseball, and these days, the stock market.

As I attended the Web 2.0 conference in New York recently, I had occasion to ask Tim O’Reilly what he thought about libraries. “Well, OCLC’s doing some good things,” he said. I encouraged him to continue looking at library standards, as the 2006 Reading 2.0 conference pulled together a number of interesting people who have been poking at the standards that knit libraries and publishers together.

But the phrase Web 2.0, coined by O’Reilly, was showing signs of age. Compared with the halcyon days, when every recently funded website showed rounded corners and artful form-submission fades, the new companies were a shadow of their former booth size. Sharing space with the Interop conference, Web 2.0 was the bullpen to the larger playing field.

Interoperability

What helps companies to grow and expand? Some posit that the value of software is estimated by lock-in, that is, the number of users who would incur switching costs by moving to a competitor or another platform.

In the standards world, lock-in is antithetical to good functioning. Certainly proprietary products and features play a role to keep innovation happening, but cultural institutions are too important to risk balkanization of data for short-term profits.

Trusted peers

It seems to me that curation has moved to the network level, and a certain amount of democratization is now possible. The cautions about privacy and users as access points are true and useful, but librarians and publishers have a role in recommending information, and this is directly correlated to expert use of recommender systems. Web 2.0 applications like del.icio.us for bookmarks, last.fm for music, and Twitter and Facebook for social networks provide a level of personal guidance that was algorithmically impossible before data was easily collectible.

Prior to last.fm’s 2007 purchase by CBS, public collective data about listening habits was deemed “too valuable” to be mashed up by programmers any longer. In the library world, there’s a unique opportunity to give users the ability to see recommendations from trusted people. Though del.icio.us does this quite well for Internet-accessible sources, there’s an open opportunity for scholarly publishers to standardize on a method. Elsevier’s recent Article 2.0 contest shows encouraging signs of moving control back to the authors and institutions that originally wrote and sponsored the work.

In the end, though, companies that are forced to choose between opening up their data and paying their employees will not likely choose the long-term reward. Part of this difficulty, however, has been tied to the lack of available legal options, standards, or licenses for releasing data into the public domain. The Creative Commons project has pointed many people to defined choices when they want to release their works into the public domain or license them for reuse.

Jonathan Rochkind of Johns Hopkins University points out that “A Creative Commons license is inappropriate for cataloging records, precisely because they are unlikely to be copyrightable. The whole legal premise of Creative Commons (and open source) licenses is that someone owns the copyright, and thus they have the right to license you to use it, and if you want a license, these are the terms. If you don’t own a copyright in the first place, there’s no way to license it under Creative Commons.”

The Open Data Commons has released a set of community norms for sharing data. This is a great step towards a standard way of separating profit concerns from the public good, and also frees companies from agonizing legal discussions about liability and best practices.

Standard widgets

If sharing entire data sets isn’t feasible, one practice that was nearly universal among Web 2.0 companies is the use of widgets to embed data and information.

In his prescient entry, “Blogs, widgets, and user sloth,” Stu Weibel describes the difficulty he had installing a widget, a still-depressing reality today.

Netvibes, a company that provides personalized start pages, has proposed a standard for a universal widget API. The jOPAC, an “integrated web widget,” uses this suggestion to make its library catalog embeddable in several online platforms and operating systems. Since widgets are still being used for commercial ventures, there seems to be an opportunity to define a clear method of data exchange. The University of Pennsylvania’s Library Portal is a good example of where this future could lead, as its portal page is flexible and customizable.

Perhaps a widget standard would give emerging companies and established ventures a method to exchange information in a way that promotes profits, privacy, and potential.


Jhumpa Lahiri • Unaccustomed Earth

Sometimes, a short story sticks with you until you find it, with pleasure, living in a larger collection. In 1991, I read a short story by Tobias Wolff standing up in a Chicago bookstore; I looked for it for years until it was included in The Night in Question.

Jhumpa Lahiri’s new book of short stories, Unaccustomed Earth, contains another haunting story, “Nobody’s Business,” first published in The New Yorker in 2001. It gives a stark account of graduate student despair: first at life delayed by years of study, then at relationships deferred until they explode into messy life. Paul, the narrator, gives an outsider account of Indian courtship rituals drawn into housemate drama. Desperate to prove he is not imagining what he has learned, he provides telephonic evidence of how his housemate is being betrayed.

Lahiri isn’t afraid to show life as it is. Painful and entangled with family obligations and academic aspirations, the stories show adult parents and children reaching accommodations with hidden truths and making adjustments to immigrant life. Her second-generation Bengali characters draw pleasure from their Harvard and MIT PhDs, just as those accomplishments push them away from their families of origin. When the characters marry outside their connections, as in “Only Goodness,” they feel guilt and relief in equal measure.

The final three stories, linked through the characters Hema and Kaushik, give a tragic account of a family left to reconstitute itself after a mother’s early death rips it asunder. Though Lahiri leaves a narrative option for easy closure, the devastating ending feels, well, like life in the midst of death.

Unaccustomed Earth: Stories

Jhumpa Lahiri; Knopf 2008
