Are You Paying Attention?

Not for the first time, the glut of incoming information threatens to reduce useful knowledge to a mere cloud of data. There’s no doubt that activity streams and linked data are two of the more interesting aids to research in this onrushing surge. In this screen-mediated age, deep focus and hyper attention are mixed up like never before, and the advantage accrues to the company that can collect the most data, aggregate it, and repurpose it for willing marketers.

N. Katherine Hayles does an excellent job of distinguishing between the uses of hyper and deep attention without privileging either. Her point is simple: deep attention is superb for solving complex problems represented in a single medium, but it comes at the price of environmental alertness and flexibility of response. Hyper attention excels at negotiating rapidly changing environments in which multiple foci compete for attention; its disadvantage is impatience with focusing for long periods on a noninteractive object such as a Victorian novel or a complicated math problem.

Does data matter?

The MESUR project is one of the more interesting research projects going, now living on as a product from Ex Libris called bX. Under the hood, MESUR looks at the research patterns behind searches, not simply the number of hits, and stores the information as triples, or subject-predicate-object statements in RDF, the Resource Description Framework. RDF triple stores can put the best of us to sleep, so one way of thinking about them is as smart filters. Having semantic information available allows computers to distinguish between Apple the fruit and Apple the computer company.
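
For the uninitiated, a triple store is less exotic than it sounds. A minimal sketch using Python’s rdflib (the URIs below are invented for illustration, not taken from MESUR) shows how typed subject-predicate-object statements let software tell the fruit from the company:

```python
# A minimal sketch of RDF triples using Python's rdflib.
# The URIs are invented for illustration; real data would use
# established vocabularies.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/things/")

g = Graph()
g.bind("ex", EX)

# Two subjects can share a label without being confused,
# because each carries its own type statement.
g.add((EX.AppleFruit, RDF.type, EX.Fruit))
g.add((EX.AppleFruit, RDFS.label, Literal("Apple")))

g.add((EX.AppleInc, RDF.type, EX.Company))
g.add((EX.AppleInc, RDFS.label, Literal("Apple")))

# A "smart filter" is then just a query over types.
companies = list(g.subjects(RDF.type, EX.Company))
print(companies)                      # only the company sense of "Apple"
print(g.serialize(format="turtle"))   # the triples, in readable Turtle
```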

In use, semantic differentiation gives striking information gains. I picked up the novel Desperate Characters, by Paula Fox. While reading it, I remembered that I first heard it mentioned in an essay by Jonathan Franzen, who wrote the foreword to the edition I purchased. The essay was published in Harper’s, and the RDF framework in use on harpers.org gave me a way to see articles both by Franzen and about him. This semantic disambiguation is the obverse of the firehose of information monetized through advertising.

Since MESUR pulls information from the SFX link resolver service logs at CalTech and Los Alamos National Laboratory, there’s an immediate relevance filter applied, given the scientists who are doing research at those institutions. Using the information contained in the logs, it’s possible to see whether a given IP address (belonging to a faculty member or department) goes through an involved research process or a short one. The researcher’s clickstream is captured and fed back for better analysis. Any subsequent researcher who clicks on a similar SFX link has a recommender system seeded with ten billion clickstreams. This promises researchers a smarter Works Cited, so that they can see what’s relevant in their field prior to publication. Competition just got smarter.
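
MESUR’s models are far more sophisticated, but a toy co-occurrence recommender hints at how clickstreams can seed “researchers who followed this link also followed” suggestions. This is an illustration only, not MESUR’s or bX’s actual algorithm, and the session data is invented:

```python
# Toy recommender seeded from clickstream logs: items that appear in the
# same research session are treated as related. Illustration only; the
# sessions below are invented.
from collections import Counter, defaultdict

sessions = [
    ["doi:10.1000/a", "doi:10.1000/b", "doi:10.1000/c"],
    ["doi:10.1000/a", "doi:10.1000/c"],
    ["doi:10.1000/b", "doi:10.1000/d"],
]

co_clicks = defaultdict(Counter)
for session in sessions:
    for item in session:
        for other in session:
            if other != item:
                co_clicks[item][other] += 1

def recommend(item, n=3):
    """Return the n items most often clicked in the same session as `item`."""
    return [other for other, _ in co_clicks[item].most_common(n)]

print(recommend("doi:10.1000/a"))  # ['doi:10.1000/c', 'doi:10.1000/b']
```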

A standards-based way of description

Attention.xml, first proposed in 2004 as an open standard by Technorati technologist Tantek Celik and journalist Steve Gillmor, promised to give priority to items that users want to see. The problem, articulated five years ago, was that feed overload is real, and seeing new items alongside what friends are reading requires a standard that allows for collaborative reading and organizing.

The standard seems to have been absorbed into Technorati, but the concept lives on in the latest beta of Apple’s browser Safari, which lists Top Sites by usage and recent history, as does Firefox’s Speed Dial. And of course, Google Reader has Top Recommendations, which tries to leverage the enormous corpus of data it collects into useful information.

Richard Powers’ novel Galatea 2.2 describes an attempt to train a neural network to recognize the Great Books, but finds socializing online to be a failing project: “The web was a neighborhood more efficiently lonely than the one it replaced. Its solitude was bigger and faster. When relentless intelligence finally completed its program, when the terminal drop box brought the last barefoot, abused child on line and everyone could at last say anything to everyone else in existence, it seemed to me we’d still have nothing to say to each other and many more ways not to say it.” Machine learning has its limits, including whether the human chooses to pay attention to the machine in a hyper or deep way.

Hunch, a web application designed by Caterina Fake, best known as a co-founder of Flickr, is a new example of machine learning. The site offers to “help you make decisions and gets smarter the more you use it.” After signing up, you’re given a list of preference questions to answer. Some are standard marketing questions, like how many people live in your household, but others are clever or winsome. The answers are used to construct a probability model, which is consulted when you answer “Today, I’m making a decision about…” As the application is a work in progress, it’s not yet a replacement for a clever reference librarian, even if its model is quite similar to the classic reference interview. It turns out that machines are best at giving advice about other machines, and if the list of results came to incorporate something larger than the open Web, the technology could represent a leap forward. Already, it does a brilliant job of applying deep attention to the hyper-sprawling web of information.

How to Achieve True Greatness

Privacy has long since returned to norms first seen in small-town America before World War II, and our sense of self is next up on the block. This is as old as the Renaissance described in Baldesar Castiglione’s The Book of the Courtier and as new as Twitter, the new party line, which gives ambient awareness of people and events.

In this age of information overload, it seems like a non sequitur that technology could solve what it created. And yet, since the business model of the 21st century is based on data and widgets made of code, not things, there is plenty of incentive to fix the problem of attention. Remember, Google started as a way to assign importance based on who was linking to whom.

This balance is probably best handled by libraries, with their obsessive attention to user privacy and reader needs, and librarians are the frontier between the machine and the person. The open question is: will the need to curate attention overwhelm those doing the filtering?

Galatea 2.2, Richard Powers; Farrar, Straus and Giroux, 1995

Repurposing Metadata

As the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has become a central component of digital library projects, increased attention has been paid to the ways metadata can be reused. As every computing project since the beginning of time has had occasion to learn, the data available for harvesting is only as good as the data entered. Given these quality issues, there are larger questions about how to reuse valuable metadata once it has been described, cataloged, annotated, and abstracted.
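
Harvesting over OAI-PMH is a plain HTTP exchange, which is part of why the protocol spread so widely. A minimal sketch (the repository base URL is hypothetical) pulls unqualified Dublin Core records and shows why a harvest can only be as good as what was originally entered:

```python
# Minimal OAI-PMH harvest of Dublin Core records. The base URL is a
# placeholder; substitute any repository that supports the protocol.
# For brevity this fetches only the first page; a full harvest would
# follow the resumptionToken.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai"
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

for record in tree.findall(".//oai:record", NS):
    titles = [t.text for t in record.findall(".//dc:title", NS)]
    creators = [c.text for c in record.findall(".//dc:creator", NS)]
    # Quality problems surface immediately: missing titles, repeated
    # creators, free-text dates. The harvest is only as good as the entry.
    print(titles, creators)
```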

Squeezing metadata into a juicer
As is often the case, the standards and library communities were out in front in thinking about how to make metadata accessible in a networked age. With the understanding that most creators of the metadata would be professionals, the Dublin Core standard left choices about repeating elements and the like up to the implementer.

This has proved to be an interesting choice, since validators and computers tend to look unfavorably on unique choices that may make sense only locally. And as weblogs took off in 2000 and were adopted by even the largest publications by 2006, these tools could not be ignored as a mass source of metadata creation.

Reusing digital objects
In the original 2006 proposal to the Mellon Foundation, Carl Lagoze wrote that “Terms like cyberinfrastructure, e-scholarship, and e-science all describe a concept of data-driven scholarship where researchers access shared data sets for analysis, reuse, and recombination with other network-available resources. Interest in this new scholarship is not limited to the physical and life sciences. Increasingly, social scientists and humanists are recognizing the potential of networked digital scholarship. A core component of this vision is a new notion of the scholarly document or publication.

Rather than being static and text-based, this scholarly artifact flexibly combines data, text, images, and services in multiple ways regardless of their location and genre.”

Once funded, the proposal turned into something interesting, with the digital library community augmented by representatives of Microsoft, Google, and other large companies. Since Atom feeds have garnered much interest and become an IETF recommended standard, there is community interest in bringing these worlds together. Now known as the Open Archives Initiative Object Reuse and Exchange (OAI-ORE), the alpha release is drawing interesting reference implementations as well as criticism of the methods used to develop it.

Resource maps everywhere
Using existing web tools is a good example of working to extend rather than invent. As Herbert van de Sompel noted in his Fall NISO Forum presentation, “Materials from repositories must be re-usable in different contexts, and life for those materials starts in repositories, it does not end there.” And as the Los Alamos National Laboratory Library experiments have shown, the amount of reuse that’s possible when you have journal data in full text is extraordinary.

Another potential use of OAI-ORE beyond the repositories it was meant to assist can be found in the Flickr Commons project. With pilot implementations from the Library of Congress, the Powerhouse Museum and the Brooklyn Museum, OAI-ORE could play an interesting role in aggregating user-contributed metadata for evaluation, too. Once tags have been assigned, this metadata could be collected for further curation. In this same presentation, van de Sompel showed a Flickr photoset as an example of a compound information object.

Anything but lack of talent
A great way to understand the standard is to see it in use. Michael Giarlo of the Library of Congress developed a plugin for WordPress, a popular content management system that generates Atom feeds. His plugin generates a resource map that is valid Atom and contains some Dublin Core elements, including title, creator, publisher, date, language, and subject. This resource map can be transformed into RDF triples via GRDDL, which again facilitates reuse by the linked data community.
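
The plugin’s real output follows the ORE Atom serialization; a simplified sketch built with Python’s standard library (the entry values and URL are invented, and the ORE-specific elements are omitted) conveys the shape of an Atom entry carrying Dublin Core elements:

```python
# Simplified sketch of an Atom entry carrying Dublin Core elements, in the
# spirit of an OAI-ORE resource map. The namespaces are real; the entry
# content and URL are invented, and ORE aggregation elements are left out.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("", ATOM)
ET.register_namespace("dc", DC)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Repurposing Metadata"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "http://example.org/blog/repurposing-metadata"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2008-06-01T12:00:00Z"

# Dublin Core elements taken, as in the plugin, from what the weblog
# author already entered.
ET.SubElement(entry, f"{{{DC}}}creator").text = "Weblog Author"
ET.SubElement(entry, f"{{{DC}}}subject").text = "metadata reuse"
ET.SubElement(entry, f"{{{DC}}}language").text = "en"

print(ET.tostring(entry, encoding="unicode"))
```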

This turns metadata creation on its head, since the Dublin Core elements are taken directly from what the weblog author enters: the title of the post, the author’s name, the subjects assigned, and the date and time of the entry. One problem OAI-ORE promises to solve is how to connect disparate URLs into one unified object, a task the use of Atom simplifies.

As the OAI-ORE specification moves into beta, it will be interesting to see if the constraints of the wider web world will breathe new life into carefully curated metadata. I certainly hope it does.

Is there a bibliographic emergency?

The Bibliographic Control Working Group held its third and final public meeting on the future of bibliographic control July 9 at the Library of Congress, focusing on “The Economics and Organization of Bibliographic Data.” The conclusions of the meetings will come in a report to be issued in November. No dramatic changes emerged from this meeting, and public comment is invited until the end of July.

The meeting featured several panels, invited speakers, and an open forum (including a public webcast). Deanna Marcum, Library of Congress associate librarian for library services, framed the discussion by saying, “Worries about MARC as an international standard make it seem like we found it on a tablet.” She went on to say, “Many catalogers believe catalogs…should be a public good, but in this world, it’s not possible to ignore economic considerations.” Marcum said there is no LC budget line that provides cataloging records for other libraries, though the CIP program has been hugely successful.

Value for money
Jose-Marie Griffiths, dean of the library school at the University of North Carolina, Chapel Hill, said there are three broad areas of concern: users and uses of bibliographic data, different needs for the data, and the economics and organization of the data. “What does free cost?” she asked. “Who are the stakeholders, and how are they organizationally aligned?”

Judith Nadler, library director, University of Chicago, moderated the discussion and said the format of the meetings was based on the oral and written testimony that was used to create Section 108 of the Copyright Law. Nadler joked that “We will have authority control–if we can afford it.”

Atoms vs bits
Rick Lugg, partner, R2 Consulting, has often spoken of the need for libraries to say no before saying yes to new things. His PowerPoint-free (at Marcum’s request–no speakers used it) presentation focused on how backlogs are invisible in the digital world. “People have difficulty managing what they have,” Lugg said. “There is a sense of a long emergency, and libraries cannot afford to support what they are doing.”

Using business language, since R2 often consults for academic libraries on streamlining processes, Lugg said libraries are not taking advantage of the value chain. Competitors are now challenging libraries in the area of search, even as technical services budgets are being challenged.

In part, Lugg credited this pressure to the basic MARC record becoming a commodity, and he estimated the cost of an original cataloged record to be $150-200. He challenged libraries to abandon the “cult of perfection,” since “the reader isn’t going to read the wrong book.”

Another area of concern is the difficulty of maintaining three stove-piped bibliographic areas, from MARC records for books, to serials holdings for link resolvers, to an A-Z list of journals. With separate print and electronic records, the total cost of bibliographic control is unknown, particularly with a lifecycle that includes selection, access, digitization, and storage or deaccession.

There is a real question about inventory control vs. bibliographic control, Lugg said. The opportunity cost of current processes leads to questions about whether libraries are putting their effort where it yields the most benefit. With many new responsibilities coming down the pike for technical services, including special collections, rare books, finding aids, and institutional repositories, libraries are challenged to retrain catalogers to expand their roles beyond MARC into new formats like MODS, METS, and Dublin Core.

Lugg said R2 found that non-MLS catalogers were often more rule-bound than professional staff, which brings about training questions. He summarized his presentation by asking:

  1. How do we reduce our efforts and redirect our focus?
  2. How can we redirect our expertise to new metadata schemes?
  3. How can we open our systems and cultures to external support from authors, publishers, abstract and indexing (A&I) services, etc?

The role of the consortium
Lizanne Payne, director of the WRLC, a library consortium for DC-area universities, said that with 200 international library consortia dedicated to containing the cost of content, the economics of bibliographic data is paramount. Noting that shared catalogs and systems date from a time when hardware and software were expensive, she said, “IT staff is the most expensive line item now.”

Payne said storage facilities require explicit placement for quick retrieval, not a relative measure like call numbers. She called for algorithms beyond FRBR that dedupe unique and overlapping copies using more than OCLC or LCCN numbers.
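
The deduping problem is easy to see in miniature: two copies of the same work can carry different control numbers, so matching has to fall back on normalized descriptive fields. A hedged sketch (the records are invented, and real matching weighs many more fields) looks like this:

```python
# Illustrative match key for deduping copies that carry different OCLC or
# LCCN numbers. Real matching (and FRBR grouping) uses many more fields;
# the records here are invented.
import re

def match_key(record):
    """Normalize title, primary author, and year into a comparison key."""
    norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    return (norm(record["title"]), norm(record["author"]), record["year"])

copy_a = {"title": "Desperate Characters", "author": "Fox, Paula",
          "year": "1970", "oclc": "123456"}
copy_b = {"title": "Desperate characters.", "author": "Fox, Paula.",
          "year": "1970", "lccn": "79-098765"}

print(match_key(copy_a) == match_key(copy_b))  # True, despite different numbers
```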

Public libraries are special
Mary Catherine Little, director of technical services, Queens Library (NY), gave a fascinating overview of her library system. With 2.2 million items circulated in 2006 in 33 languages, 45,000 visitors per day, and 75,000 titles cataloged last year, Queens is the busiest library in the United States and has 66 branches within “one mile of every resident.”

Little said their ILS plans are evolving, “Heard about Sirsi/Dynix?” With its multilingual and growing collection, Little detailed their process. First, they ask if they are the first library to touch the record. Then, they investigate whether the ILS can function with the record “today, then tomorrow,” and ask if the record can be found from an outside source. The library prefers to get records from online vendors or directly from the publishers, and has 90 percent of English records in the catalog prior to publication.

Queens has devised a model for international providers that revolves around receiving monthly lists of high-demand titles, especially from high-demand Chinese publishers, and standing orders. With vendors feeling the push from the library, many then enter into partnerships with OCLC.

“Uncataloged collections are better than backlogs,” Little said, and many patrons discover high-demand titles by walking around, especially audio and video. “We’ve accepted the tradeoffs,” she said.

Little made a call for community tagging, word clouds, and open source and extensible catalogs, and said there is a continuing challenge to capture non-Roman data formats.

“Global underpinnings are the key to the future, and Unicode must be present,” Little said, “The Library of Congress has been behind, and the future is open source software and interoperability through cooperation.”

Special libraries harmonize
Susan Fifer Canby, National Geographic Society vice president of library and information services, said her library contains proprietary data and works to harmonize taxonomies across various content management systems (CMSs), enhancing records with useful metadata to give her users a Google-like search.

Canby said this work has led to a relative consistency and accuracy, which helps users bridge print and electronic sources. Though some special libraries are still managing print collections, most are devoting serious amounts of time to digital finding aids, competitive information gathering, and future analysis for their companies to help connect the dots. The library is working to associate latitude and longitude information with content to aid with mashups.

The National Geographic library uses OCLC records for books and serials, simple MARC records for maps, and more complex records for ephemera, “though [there’s] no staff to catalog everything.” The big challenge, however, is cataloging photographs, since the ratio used to be 100 shots for every published photo, and now it’s 1000 to 1. “Photographers have been incentivized to provide keywords and metadata,” Canby said. With the rise of embedded IPTC data, the photographers are adding terms from drop-down menus, free-text fields, and conceptual terms.

The library is buying digital content, but not yet HD content, since it’s too expensive due to its large file size. Selling large versions of its photos through ecommerce has given the library additional funds for special librarians to do better, Canby said.

Special libraries face challenges in getting their organizations to implement digital document solutions, since most people use email as a filing strategy instead of metadata-based solutions. Another large challenge is that most companies view special libraries as a cost center, and just sustaining services is difficult. Since the special library’s primary role isn’t cataloging, which is outsourced and often assigned to interns, the bottom line is to develop a flexible metadata strategy that includes collaborating with the Library of Congress and users to make it happen.

Vendors and records
Bob Nardini, Coutts Information Services, said book vendors are a major provider of MARC records and may employ as many catalogers as the Library of Congress does. Nardini said Coutts relies on LC CIP records, and that both publishers and LC are under pressure to do more with less. He advocated doing more in the early stages of a book’s life and gave an interesting statistic about the commodity status of a MARC record from the Library of Congress: with an annual subscription to the LC records, the effective cost is $.06 per record.

PCC
Mechael Charbonneau, director of technical services at Indiana University Libraries, gave some history about how cataloging came under threat in 1996 because of budget crunches. In part, the Program for Cooperative Cataloging (PCC) came about to extend collaboration and to find cost savings. Charbonneau said that PCC records are considered to be equivalent to LC records, “trustworthy and authoritative.” With its four main areas, BIBCO for bibliographic records, NACO for name authorities, SACO for subject authorities, and CONSER for serial records, international participants have effectively supplemented the Library of Congress records.

PCC’s strategic goals include looking at new models for non-MARC metadata, being proactive rather than reactive, responding with flexibility, achieving close working relationships with publishers, and internationalizing authority files, an effort that has begun with LC, OCLC, and the Deutsche Bibliothek.

Charbonneau said that in her role as an academic librarian, she sees the need to optimize the allocation of staff in large research libraries and to free up catalogers to do new things, starting with user needs.

Abstract and indexing
Linda Beebe, senior director of PsycINFO, said the American Psychological Association (APA) has similar goals for its database, including the creation of unique metadata and controlled vocabularies. Beebe sees linking tools as a way to give users access to content. Though Google gives users breadth, not precision, partnerships to link to content using CrossRef’s DOI service have started to solve the appropriate copy problem. Though “some access is better than none,” she cautioned that in STM, a little knowledge is a dangerous thing.
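
The linking Beebe describes is, mechanically, a link-resolver request: a citation or DOI is packaged as an OpenURL so the resolver can route the user to a copy their library licenses. A sketch of building such a request (the resolver address, DOI, and citation values are placeholders) follows:

```python
# Sketch of an OpenURL 1.0 request carrying a DOI, so a link resolver can
# route the user to the appropriate licensed copy. The resolver base URL,
# DOI, and citation values below are placeholders.
from urllib.parse import urlencode

RESOLVER = "https://resolver.example.edu/openurl"

params = {
    "url_ver": "Z39.88-2004",                       # OpenURL 1.0 version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article format
    "rft_id": "info:doi/10.1037/0000000",           # placeholder DOI
    "rft.atitle": "An example article",
    "rft.jtitle": "Journal of Examples",
}

print(RESOLVER + "?" + urlencode(params))
```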

Beebe said there is a continuing need for standards, but “how many, and can they be simplified and integrated?” With a dual audience of librarians and end-users, A&I providers feel the need to make the search learning curve gentle while preserving the need for advanced features that may require instruction.

A robust discussion ensued about the need for authority control for authors in A&I services. NISO emerging standards and the Scopus author profile were discussed as possible solutions. The NISO/ISO standard is being eagerly adopted by publishers as a way to pay out royalties.

Microsoft of the library world?
Karen Calhoun, OCLC VP for WorldCat and Metadata Services, listed seven economic challenges for the working group: productivity, redundancy, value, scale, budgets, demography, and collaboration. Pointing to Fred Kilgour, OCLC founder, as leading libraries into an age of “enhanced productivity in cataloging,” Calhoun said new models of acquisition are the next frontier.

With various definitions of quality from libraries and end users, libraries must broaden their scale of bibliographic control to cover more materials. Calhoun argued that “narrowing our scope is premature.” With intense budget pressure “not being surprising,” new challenges include a wave of retirements that reaches full strength starting in 2010.

Since libraries cannot work alone, and cost reductions are not ends in themselves, OCLC can create new opportunities for libraries. Calhoun compared the OCLC suite of services to the electric grid, and said remixable and reusable metadata is the way of the future, coming from publishers, vendors, authors, reviewers, readers, and selectors.

“WorldCat is an unexploited resource, and OCLC can help libraries by moving selected technical services to the network,” Calhoun said. Advocating moving library services to the OCLC bill “like PayPal,” Calhoun said libraries could reduce their manpower costs.

Teri Frick, technical services librarian at the Orange County Public Library (VA), questioned Calhoun, saying her library can’t afford OCLC. Calhoun admitted, “OCLC is struggling with that,” and “I don’t think we have the answers.”

Frick pointed out that her small public library has the same needs as the largest library, and said any change to LC cataloging policy would have a large effect on her operations in southwestern Virginia. “When LC cut–and I understand why–it really hurt.”

Library of Congress reorganizes
Beacher Wiggins, Library of Congress director for acquisitions and bibliographic control, read a paper that gave the LC perspective. Wiggins cited Marcum’s 2005 paper that disclosed the costs of cataloging at $44 million per year. LC has 400 cataloging staff (down from 750 in 1991), who cataloged 350,000 volumes last year.

The library reorganized acquisitions and cataloging into one administrative unit in 2004, but cataloger workflows will not be merged until 2008, with retraining to take place over the next 12-36 months. New job descriptions will be created, and new partners for international records (excluding authority records) are being selected. After an imbroglio over redistribution of records from the Italian book dealer Casalini, Wiggins said, “For this and any future agreements, we will not agree to restrict redistribution of records we receive.”

In further questioning, Karen Coyle, library consultant, pointed out that the education effort would be large, as well as the need to retrain people. Wiggins said LC is not giving up on pre-coordination, which had been questioned by LC union member Thomas Mann and others, but that they are looking at streamlining how it is done.

Judith Cannon, Library of Congress instruction specialist, said “We don’t use the products we create, and I think there’s a disconnect there. These are all interrelated subjects.”

NLM questions business model
Dianne McCutcheon, chief of technical services at the National Library of Medicine, agreed that cataloging is a public good and that managers need to come up with an efficient cost/benefit ratio. However, McCutcheon said, “No additional benefit accrues to libraries for contributing unique records–OCLC should pay libraries for each use of a unique record.”

McCutcheon spoke in favor of incorporating ONIX from publishers in place of or to supplement MARC, and of developing “the appropriate crosswalks.” With publishers working in electronic environments, libraries should use the available metadata to enhance records and build in further automation. Since medical publishers are submitting citation records directly to NLM for inclusion in Medline, the library is seeing a significant cost savings, from $10 down to $1 a record. The NLM’s Medical Text Indexer (MTI) is another useful tool, which assists catalogers in assigning subject headings, with 60 percent agreement.
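
A crosswalk of the kind McCutcheon mentions is, at its simplest, a mapping from publisher ONIX elements onto the catalog fields they roughly correspond to. The toy sketch below uses ONIX 2.1 reference tags and an invented sample record; production crosswalks handle far more elements and edge cases:

```python
# Toy ONIX-to-MARC-style crosswalk: pull a few ONIX 2.1 elements and map
# them to roughly corresponding MARC fields. The sample record is invented,
# and this is an illustration, not NLM's actual crosswalk.
import xml.etree.ElementTree as ET

onix = """
<Product>
  <Title><TitleText>Desperate Characters</TitleText></Title>
  <Contributor><PersonName>Paula Fox</PersonName></Contributor>
  <Publisher><PublisherName>Example Press</PublisherName></Publisher>
</Product>
"""

root = ET.fromstring(onix)
crosswalk = {
    "245$a": root.findtext(".//TitleText"),      # title statement
    "100$a": root.findtext(".//PersonName"),     # main entry, personal name
    "260$b": root.findtext(".//PublisherName"),  # publisher
}
print(crosswalk)
```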

NAL urges more collaboration
Christopher Cole, associate director of technical services at the National Agricultural Library (NAL), said that like the NLM, the NAL is both a library and an A&I provider. By using publisher-supplied metadata as a starting point, adding access points, and doing quality control, “quality has not suffered one bit.” Cole said the NAL thesaurus was recreated six or seven years ago after previously relying on FAO and CAB information, and he advocated a similar reinvention elsewhere. Cole said, “Use ONIX. The publishers supply it.”

Tagging and privacy
Dan Chudnov, Library of Congress Office of Strategic Initiatives, made two points, first saying that social tagging is hard and that its value is an emergent phenomenon with no obvious rhyme or reason. Chudnov said it happens in context, and referenced Tim Spalding’s talk given at LC. “The user becomes an access point, and this is incompatible with the ALA Bill of Rights on privacy that we hold dear,” he said.

Finally, Chudnov advocated for the inclusion of computer scientists from the wider community, perhaps in a joint meeting joined by vendors.

Summing up
Robert Wolven, Columbia University director of library systems and bibliographic control and a working group member, summarized the meeting by saying that the purpose was to find the “cost sinks” and the “collective efficiencies,” since metadata has a long life cycle. Cautioning that there are “no free rides,” he said libraries must find ways to recoup their costs.

Marcum cited LC’s mission, which is “to make the world’s creativity and the world’s knowledge accessible to Congress and the American people,” and said the LC’s leadership role can’t be diminished. With 100 million hidden items (including photos, videos, etc), curators are called upon in 21 reading rooms to direct users to hidden treasures. But “in the era of the Web, user expectations are expanding but funding is not. Thus, things need to be done differently, and we will be measuring success as never before,” Marcum said.

ALA 2007: Online Books, Copyright, and User Preferences

Ben Bunnell, Google library partnership manager, and Cliff Guren, Microsoft director of publisher evangelism, presented their view of the future to reference publishers June 22 during ALA at the Independent Reference Publishers Group meeting.

Google moves into reference
Bunnell said it was his first time presenting to publishers instead of librarians, and he gave a brief overview of the Google Books program. It has now digitized one million of 65 million books worldwide, and has added Spanish language books to its collections via partnerships with the University of Texas Austin and the University of Madrid. Google is finding that librarians have been using Book Search for acquisitions, which is a somewhat unexpected use.

Microsoft innovates behind
Cliff Guren said Microsoft’s goal is to turn web search into information search. “The reality is that 5 percent of the world’s information is digitized, less than 1 percent of the National Archives and less than 5 percent of the Library of Congress.”

Guren described new initiatives within Live Search, first launched in April 2006, including a partnership with Ingram to store copies of digitized texts, agreements with CrossRef, HighWire, ERIC, and JSTOR for metadata, and Books in Print data. Live Academic Search currently has 40 million articles from 30,000 journals and includes books from “out of copyright content only.” Library partners include the University of California, the University of Toronto, Cornell University, the New York Public Library, and the British Library. Technology partners include Kirtas Technologies and the Internet Archive, recently declared a library in its own right by the State of California.

New features in Live Book Search include options for publishers to retain control, including displaying percent viewable, image blocking, pages forward and back, and a page-range exclusion modifier that also shows the user the number of pages allotted. The most unique feature shown was a view of the book page with a highlighted snippet.

Libraries negotiate collaboratively
Mark Sandler, director of CIC library initiatives, followed the sales presentations with some “inconvenient truths.” Sandler said library print legacy collections are deteriorating, some content has been lost in research libraries, and “users prefer electronic access.”

Stating the obvious, Sandler said “we can’t sustain hybridity,” referring to overlapping print and electronic collection building. More controversially, he made the claim that “Maybe we’re not in the book business after all.” Sandler said books take many shapes in libraries, including ebooks, database content, and audiobooks, and that pricing models have shifted to include aggregate collections and “by the drink.”

With legacy collections digitized, including the American Memory Project, Making of America, Documenting the American South, Valley of the Shadow, and Wright’s American Fiction, libraries had an early start with these types of projects. But with Google’s mission of organizing all the world’s information and making it universally accessible, Sandler claimed libraries are at the point of no return vis-à-vis change. With library partnerships not only with Google and Microsoft but also with Amazon, the Million Book Project (MBP), and new royalty arrangements, Sandler said there’s a world of new work for libraries to do, including using digitized texts to make transformative works with math, chemical equations, and music, and to archive, integrate, and aggregate content.

Millennials
Lynn Silipigni Connaway, OCLC Research, and Marie Radford, Rutgers University associate professor, described their IMLS-funded grant on millennials’ research patterns. Using a somewhat ill-conceived reproduction of a chat reference interaction gone awry, Connaway and Radford talked about “screenagers” and described user frustration with current reference tools.

“Libraries need to build query share,” Connaway said. Their research intends to study non-users as well as experiential users and learners. One of the initial issues is that since students have been taught to guard privacy online, librarians can be viewed as “psychos and internet stalkers” when they enter online environments like Facebook and MySpace.

What’s in it for us?
Reference publishers asked Google and Microsoft representatives, “What’s in it for us to collaborate with you?”

Cliff Guren said, “If I were in your business, I would be scared–your real competition is Wikipedia.” Bunnell deflected the question, saying “librarians use Google Book Search” and advised publishers to “try a few books and see what happens.” Bunnell said he had been surprised to see thesaurus content and other reference books added by publishers, as he had thought they would be outside the scope. “Yet Merriam-Webster added their synonyms dictionary, and they seem to be pleased.”

Guren said, “We think we’re adding value for independent publishers,” but “if there are 400 reference works on the history of jazz, perhaps there will only be 5 or 10 needed in the future because of the inefficiencies of the print system.” Bunnell countered this point with an example, saying, “Cambridge University Press is using Google Book stats to determine what backlist books to bring back into print.”

John Dove, Credo CEO (formerly xRefer), spoke about the real difference between facts and knowledge, saying that “facts should be open to all.” Connaway said OCLC is finding that WorldCat.org referral traffic stats show 50 percent of users come from Google Book Search, 40 percent from libraries, and 9 percent from blogs and wikis.

Future of print?
Gale Reference said they are seeing declining profits from print reference and asked, “What’s the life of a reference book? Does it have 5 or 10 years left?” Radford answered, “I think the paper reference book will be disappearing.” She said all New Jersey universities will share reference collections because of lack of space and funds. Guren was more encouraging: “There’s still a need for what you [reference publishers] do. Reference information is needed, though perhaps a reference book is not.”

ALA 2007: Top Tech Trends

At the ALA Top Tech Trends Panel, panelists including Marshall Breeding, Roy Tennant, Karen Coombs, and John Blyberg discussed RFID, open source adoption in libraries, and the importance of privacy.

Marshall Breeding, director for innovative technologies and research at Vanderbilt University Libraries (TN), started the Top Tech Trends panel by referencing his LJ Automation Marketplace article, “An Industry Redefined,” which predicted unprecedented disruption in the ILS market. Breeding said 60 percent of the libraries in one state are facing a migration due to changes in the Sirsi/Dynix product roadmap, but he said not all ILS companies are the same.

Breeding said open source is new to the ILS world as a product, even though it’s been used as infrastructure in libraries for many years. Interest has now expanded to the decision makers. The Evergreen PINES project in Georgia, with 55 of 58 counties participating, was mostly successful. With the recent decision to adopt Evergreen in British Columbia, there is movement to open source solutions, though Breeding cautioned that adoption is still minuscule compared to most libraries.

Questioning whether the switch can be compared to an avalanche, Breeding noted that several commercial support companies have sprung up to serve the open source ILS market, including LibLime, Equinox, and CARe Affiliates. Breeding predicted an era of new decoupled interfaces.

John Blyberg, head of technology and digital initiatives at Darien Public Library (CT), said the back end [of the ILS] needs to be shored up because it has a ripple effect on other services. Blyberg said RFID is coming, and it makes sense for use in sorting and book storage, echoing Lori Ayres’s point that libraries need to support the distribution demands of the Long Tail. Regarding privacy concerns as non-starters, because RFID is essentially a barcode, he said the RFID information is stored in a database, which should be the focus of security concerns.

Finally, Blyberg said that vendor interoperability and a democratic approach to development are needed in the age of Innovative’s Encore and Ex Libris’ Primo, both of which can be used with different ILS systems and can decouple the public catalog from the ILS. With the eXtensible Catalog (XC) and Evergreen coming along, Blyberg said there is a need for funding and partners to further their development.

Walt Crawford of OCLC/RLG said the problem with RFID is the potential of having patron barcodes chipped, which could lead to the erosion of patron privacy. Intruders could datamine who’s reading what, which Crawford said is a serious issue.

Joan Frye Williams countered that both Blyberg and Crawford were insisting on applying logic to what is essentially a political problem. Breeding agreed, saying that airport security could scan chips, and that his concern is that third-generation RFID chips may not be readable in 30 years, much less the hundreds of years that we expect barcodes to be around.

Karen Coombs, head of web services at the University of Houston (TX), listed three trends:
1. The end user as content contributor, which she cautioned is an issue: what happens if YouTube goes under and people lose their memories? Coombs pointed to the National Library of Australia’s partnership with Flickr as a positive development.
2. Digital as the format of choice for users, pointing to iTunes for music and Joost for video. Coombs said there is currently no way for libraries to provide this to users, especially in public libraries. Though companies like OverDrive and Recorded Books exist to serve this need, perhaps her point was that consumer adoption has outpaced what libraries currently offer.
3. A blurred line between desktop and web applications, which Coombs demonstrated with YouTube remixer and Google Gears, which lets you read your feeds when you’re offline.

John Blyberg responded to these trends, saying that he sees academic libraries pursuing semantic web technologies, including developing ontologies. Coombs disagreed with this assessment, saying that libraries have lots of badly tagged HTML pages. Tennant agreed, saying, “If the semantic web arrives, buy yourself some ice skates, because hell will have frozen over.”

Breeding said that he longs for SOA [service-oriented architecture] “but I’m not holding my breath.” And Walt Crawford said, “Roy is right: most content providers don’t provide enough detail, and they make easy things complicated and don’t tackle the hard things.” Coombs pointed out that “people are too concerned with what things look like,” to which Crawford interjected, “not too concerned.”

Roy Tennant, OCLC senior program manager, listed his trends:
1. Demise of the catalog, which should push the OPAC into the back room where it belongs and elevate discovery tools like Primo and Encore, as well as OCLC WorldCat Local.
2. Software as a Service (SaaS), formerly known as ASP and hosted services, which means librarians don’t have to babysit machines, and is a great thing for lots of librarians.
3. Intense marketplace uncertainty due to the private equity buyouts of Ex Libris and SirsiDynix and the rise of Evergreen and Koha as looming options. Tennant also said he sees WorldCat Local as a disruptive influence. Aside from the ILS, the abstract and indexing (A&I) services are being disintermediated as Google and OCLC go directly to publishers to license content.
Someone asked if libraries should get rid of local catalogs, and Tennant said, “only when it fits local needs.”

Walt Crawford said:
1. Privacy still matters. Crawford questioned whether patrons really want libraries to turn into Amazon in an era of government data mining and inferences that could track a ten-year patron borrowing pattern.
2. The slow library movement, which argues that locality is vital to libraries, mindfulness matters, and open source software should be used where it works.
3. The role of the public library as publisher. Crawford pointed to libraries in Charlotte-Mecklenburg County, the Vermont libraries that Jessamyn West works with, and Wyoming as farther along this path, and said the tools are good enough that it’s becoming practical.

Blyberg said that systems need to be more open to the data that we put in them. Williams said that content must be disaggregatable and remixable, and Coombs pointed out the current difficulty of swapping out ILS modules and said ERM was a huge issue. Tennant referenced the Talis platform and said one of Evergreen’s innovations is its use of the XMPP (Jabber) protocol, which is easier than SOAP web services, which are too heavyweight.

Marshall Breeding responded to a question asking whether MARC was dead by saying, “I’m married to a cataloger, but we do need things in addition to MARC,” which is good for books, such as Dublin Core and ONIX. Coombs pointed out that MARCXML is a mess because it’s retrofitted and doesn’t leverage the power of XML. Crawford said, “I like to give Roy [Tennant] a hard time about his phrase ‘MARC is dead,’ and for a dying format, the Moen panel was full at 8 a.m.”

Questioners asked what happens when the one server goes down, and Blyberg responded, “What if your T-1 line goes down?” Joan Frye Williams exhorted the audience to “examine your consciences” when asking vendors how to spend their time. Coombs agreed, saying that her experience on user groups had exposed her to the crazy competing demands vendors face: “[they] are spread way too thin.” Williams said there are natural transition points, spoke darkly of a pyramid scheme, and said you get the vendors you deserve. Coombs agreed, saying, “Feature creep and managing expectations is a fiercely difficult job,” and noted that open source developers and support staff are different people.

Joan Frye Williams, information technology consultant, listed:
1. A new menu of end-user-focused technologies. Williams said she worked in libraries when the typewriter was replaced by an OCLC machine, and libraries are still not using technology strategically. “Technology is not a checklist,” Williams chided, saying that the 23 Things movement of teaching new skills to library staff is insufficient.
2. Ability for libraries to assume development responsibility in concert with end-users
3. The need to make things more convenient, adopting artificial intelligence (AI) principles of self-organizing systems. Williams asked, “If computers can learn from their mistakes, why can’t we?”

Someone asked why libraries are still using the ILS. Coombs said it’s a financial issue, and Breeding responded sharply, “How can we not automate our libraries?” Walt Crawford agreed: “Are we going to return to index cards?”

When the panel was asked if library home pages would disappear, Crawford and Blyberg both said they would be surprised. Williams said the product of the [library] website is the user experience. She said Yorba Linda Public Library (CA) is enhancing its site with a live feed of books that scrolls as items are checked in.

And another audience member asked why the panel didn’t cover toys and protocols. Crawford said outcomes matter, and Coombs agreed, saying, “I’m a toy geek, but it’s the user that matters.” Many participants talked about their use of Twitter, and Coombs said portable applications on a USB drive have the potential to change public computing in libraries. Tennant recommended viewing the Photosynth demo, first shown at the TED conference.

Finally, when asked how to keep up with trends, especially for new systems librarians, Coombs said, “It depends what kind of library you’re working in. Find a network and ask questions on the code4lib [IRC] channel.”

Blyberg recommended constructing a blogroll that includes sites from the humanities, sciences, and library and information science to become a well-rounded feed reader. Tennant recommended a (gasp) dead-tree magazine, Business 2.0. Coombs said the Gartner website has good information about technology adoption, and Williams recommended trendwatch.com.

Links to other trends:
Karen Coombs Top Technology Trends
Meredith Farkas Top Technology Trends
3 Trends and a Baby (Jeremy Frumkin)
Some Trends from the LiB (Sarah Houghton-Jan)
Sum Tech Trends for the Summer of 2007 (Eric Lease Morgan)

And other writeups and podcasts:
Rob Styles
Ellen Ward
Chad Haefele

Presenting at ALA panel on Future of Information Retrieval

The Future of Information Retrieval

Ron Miller, Director of Product Management, HW Wilson, hosts a panel of industry leaders including:
Mike Buschman, Program Manager, Windows Live Academic, Microsoft.
R. David Lankes, PhD, Director of the Information Institute of Syracuse, and Associate Professor, School of Information Studies, Syracuse University.
Marydee Ojala, Editor, ONLINE, and contributing feature and news writer to Information Today, Searcher, EContent, Computers in Libraries, among other publications.
Jay Datema, Technology Editor, Library Journal

Add to calendar:
Monday, 25 June 2007
8-10 a.m., Room 103b
Preliminary slides and audio attached.

Open Data: What Would Kilgour Think?

The New York Public Library has reached a settlement with iBiblio, the public’s library and digital archive at the University of North Carolina at Chapel Hill, over iBiblio’s harvesting of records from NYPL’s Research Libraries catalog, which NYPL claims is copyrighted.

Heike Kordish, director of the NYPL Humanities Library, said a cease-and-desist letter was sent because of a 1980s incident in which an Australian harvesting effort turned around and resold NYPL records.

Simon Spero, iBiblio employee and technical assistant to the assistant vice chancellor at UNC-Chapel Hill, said NYPL requested that its library records be destroyed, and the claim was settled with no admission of wrongdoing. “I would characterize the New York Public Library as being neither public nor a library,” Spero said.

It is a curious development that while the NYPL is making arrangements under private agreements to allow Google to scan its book collection into full text, it feels free to threaten other research libraries over MARC records.

The price of open data
This follows a similar string of disagreements about open data between OCLC and the MIT Simile project, whose Barton Engineering Library catalog records were widely made available via BitTorrent, a decentralized file-sharing protocol.

This has since been resolved by making the Barton data available again, though in RDF and MODS, not MARC, under a Creative Commons license for non-commercial use.

OCLC CEO Jay Jordan said the issues around sharing data had their genesis in concerns about the Open WorldCat project and sharing records with Microsoft, Google, and Amazon. Other concerns about private equity firms entering the library market also drove recent revisions to the data sharing policies.

OCLC quietly revised its policy about sharing records, which had not been updated since 1987 after numerous debates in the 1980s about the legality of copyrighting member records.

The new WorldCat policy, reads in part, “WorldCat® records, metadata and holdings information (“Data”) may only be used by Users (defined as individuals accessing WorldCat via OCLC partner Web interfaces) solely for the personal, non-commercial purpose of assisting such Users with locating an item in a library of the User’s choosing… No part of any Data provided in any form by WorldCat may be used, disclosed, reproduced, transferred or transmitted in any form without the prior written consent of OCLC except as expressly permitted hereunder.”

Looking through the most recent board minutes, it looks like concerns have been raised about “the risk to FirstSearch revenues from OpenWorldCat,” and management incentive plans have been approved.

What is good for libraries?
Another project initiated by Simon Spero, named Fred 2.0 after the recently deceased Fred Kilgour of OCLC, Yale, and Chapel Hill fame, recently released Library of Congress authority file and subject heading information, which was gathered by means similar to the NYPL records.

Spero said the project is “dedicated to the men and women at the Library of Congress and outside, who have worked for the past 108 years to build these authorities, often in the face of technology seemingly designed to make the task as difficult as possible.”

Since Library of Congress data, as government information, cannot by definition be copyrighted, the project was more collaborative in nature and has received acclaim for its help in pointing out cataloging irregularities in the records. OCLC also offers a linked authority file as a research project.

Firefox was born from open source
While there is not yet consensus about what will be built as a result of releasing library data, the move can be compared to Netscape open-sourcing the Mozilla code in 1998, which eventually brought Firefox and other open source projects to light. It also shows that the financial motivations of library organizations by necessity dictate the legal mechanisms of protection.

code4lib 2007

Working Code Wins
Responding to increasing consolidation in the ILS market, library developers demonstrated alternatives and supplements to library software at the second annual code4lib conference in Athens, GA, February 27-March 2, 2007. With 140 registered attendees from many states and several countries, including Canada and the United Kingdom, the conference was a hot destination for a previously isolated group of developers.

Network connectivity was a challenge for the Georgia Center for Continuing Education, but the hyperconnected group kept things interesting, and the attendees, coordinated by Roy Tennant, artfully architected workarounds and improvements as the conference progressed.

In a nice mixture of emerging conference trends, code4lib combined the flexibility of the unconference with 20 minute prepared talks, keynotes, five minute Lightning Talks, and breakout sessions. The form was derived from Access, the Canadian library conference.

Keynotes
The conference opened with a talk from Karen Schneider, associate director for technology and research at Florida State University. She challenged the attendees to sell open source software to directors in terms of solutions it provides, since the larger issue in libraries is saving digital information. Schneider also debated Ben Ostrowsky, systems librarian at the Tampa Bay Library Consortium, about the importance of open source software from the stage, to which Ostrowsky responded, “Isn’t that Firefox [a popular open source browser] you’re using there?”

Erik Hatcher, author of Lucene in Action, gave a keynote about using Apache Solr, the full-text search server; Lucene, the open source search engine; and Flare, the faceted browser, to construct a new front end to library catalog data. The previous day, Hatcher led a free preconference for 80 librarians who brought exported MARC records, including attendees from Villanova University and the University of Virginia.

Buzz
One of the best-received talks revolved around BibApp, an “institutional bibliography” written in Ruby on Rails by Nate Vack and Eric Larson, two librarians at the University of Wisconsin-Madison. The prototype application is available for download, but it currently relies on citation data from engineering databases to construct a profile of popular journals, publishers, citation types, and whom researchers are publishing with. “This is copywrong, which is sometimes what you have to do to construct digital library projects. Then you get money to license it,” Larson said.

More controversially, Luis Salazar gave a talk about using Linux to power public computing in the Howard County (MD) public library system. A former NSA systems administrator, he presented the pros and cons of supporting 300 staff and 400 public access computers using Groovix, a customized Linux distribution. Since the abundant number of computers serves the public without needing sign up sheets, “patrons are able to sit down and do what they want.”

Salazar created a script for monitoring all the public computers, and described how he engaged in a dialog with a patron he dubbed “Hacker Jon,” who used the library computers to develop his nascent scripting skills. Bess Sadler, librarian and metadata services specialist at the University of Virginia, asked about the privacy implications of monitoring patrons. “Do you have a click-through agreement? Privacy Policy?” she asked. Salazar joked that “It’s Maryland, we’re like a communist country” and said he wouldn’t do anything in a public library that he wouldn’t expect to be monitored.

Casey Durfee presented a talk on “Endeca in 250 lines of code or less,” which showed a prototype of faceted searching at the Seattle Public Library. The new catalog front end sits on top of a Horizon catalog and uses Python and Solr to present results in an elegant display, from a Google-inspired single search box to rich subject browse options.
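
Durfee’s approach leans on faceting that Solr provides out of the box. A minimal sketch of that kind of query (the host, core name, and field names are assumptions, not Seattle’s actual configuration) looks like this:

```python
# Minimal faceted query against a Solr index, in the spirit of the Seattle
# prototype. Host, core name, and field names are assumptions, not the
# library's actual setup.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": "seattle history",
    "rows": 10,
    "facet": "true",
    "facet.field": "subject",
    "facet.mincount": 1,
    "wt": "json",
})

url = "http://localhost:8983/solr/catalog/select?" + params
with urllib.request.urlopen(url) as response:
    results = json.load(response)

print(results["response"]["numFound"])                       # hit count
print(results["facet_counts"]["facet_fields"]["subject"])    # subject facets
```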

The future
This year’s sponsors included Talis, LibLime, OCLC, Logical Choice Technologies, and Oregon State University. OSU awarded two scholarships to Nicole Engard, Jenkins Law Library (2007 LJ Mover and Shaker), and Joshua Gomez, Getty Research Institute.

Next year’s conference will be held in Portland, OR.

Taiga 2 Forum moves into Open Space

Assistant University Librarians and Assistant Directors met for the second annual Taiga Forum a day before ALA Midwinter, Seattle, to discuss the changing dynamics of academic libraries.

In a change from last year, the participants used the Open Space structure to stage an unconference, where the conversation topics were chosen by the participants.

Topics included Search, Radical Collaboration, and Google: Friend or Foe, among others. The guiding principles were, “Whoever comes is the right person, whatever happens is the only thing that could have happened, whenever it starts is the right time, and when it’s over, it’s over.” The Endangered Species conference met in an adjoining conference room.

Meg Bellinger, Yale University Associate University Librarian, said, “We came away with the sense that we don’t have all of the answers but we all share the same problems. We must spend time moving beyond the current issues towards solutions.”

The meeting was sponsored by Innovative Interfaces, Inc.

Open source metasearch

Now there’s a new kid on the (meta)search block. LibraryFind, an open-source project funded by the State Library of Oregon, is currently live at Oregon State University. The library has just packaged up a release for anyone to download and install.

Jeremy Frumkin, Gray chair for Innovative Library Services at OSU, said the goals were to contribute to the support of scholarly workflow, remove barriers between the library and Web information, and to establish the digital library as platform.

Lead developers Dan Chudnov, soon to join the Library of Congress’s Office of Strategic Initiatives, and Terry Reese, catalog librarian and developer of the popular application MarcEdit, worked with the following guiding principles: two clicks, one to find and one to get; a goal of returning results in four seconds; and known, adjustable results ranking.
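
The four-second goal is essentially a matter of searching targets in parallel and cutting off stragglers at a deadline. A hedged sketch of that pattern (the targets and search function are placeholders, not LibraryFind’s actual Ruby on Rails code) follows:

```python
# Sketch of the parallel-search-with-deadline pattern behind a
# "results in four seconds" goal. The target list and search function are
# placeholders, not LibraryFind's implementation.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout, as_completed

TARGETS = ["catalog", "institutional repository", "article database"]

def search_target(target, query):
    """Placeholder: query one search target and return its result list."""
    return [f"{target} result for {query!r}"]

def metasearch(query, deadline=4.0):
    results = []
    pool = ThreadPoolExecutor(max_workers=len(TARGETS))
    futures = [pool.submit(search_target, t, query) for t in TARGETS]
    try:
        for future in as_completed(futures, timeout=deadline):
            results.extend(future.result())
    except FuturesTimeout:
        pass  # return whatever arrived before the deadline
    finally:
        pool.shutdown(wait=False)
    return results

print(metasearch("open source metasearch"))
```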

Other OSU project members included Tami Herlocker, point person for interface development, and Ryan Ordway, system administrator. Frumkin said, “The Ruby on Rails platform provided easy, quick user interface development. It gives a variety of UI possibilities, and offers new interfaces for different user groups.”

The application includes collaborations on the OpenURL module from Ross Singer, library applications developer at the Georgia Tech library, and Ed Summers, Library of Congress developer. Journal coverage can be imported from a SerialsSolutions export, and more import facilities are planned in upcoming releases.

OSU is working on a contract with OCLC’s WorldCat to download data, and is looking to build greater trust relationships with vendors. “The upside for vendors is they can see how their data is used when developing new services,” Frumkin said.

Future enhancements include an information dashboard and a personal digital library. Developers are also staffing a support chatroom for technical support, help, and development discussion of LibraryFind.