BOBCATSSS 2017. LIS students have the power!

Hello,

Last week (25–27 February 2017), BOBCATSSS 2017 was held at the University of Tampere, Finland. As every year, LIS students from several European countries (and from the USA) shared their knowledge and experiences in presentations and workshops.

The conference began with an opening ceremony in which the Moomins were the highlight of several performances. Next, Carol Tenopir, professor at the University of Tennessee, Knoxville, presented “Researchers need information too: how information improves the quality of work life”, about reading habits and values in Finland. After lunch, the day was completed with paper sessions and workshops, some of them very interesting. The social program of this first day offered attendees a very wide range of venues to visit. I chose the Lenin Museum, the first museum dedicated to Lenin outside the Soviet Union, which commemorates the meeting that Lenin and Stalin held in Tampere to organize the 1917 Soviet Revolution. Very interesting…

On the second day, it was Guus van den Brekel (University Medical Centre Groningen) who opened the conference with his speech “Interactive media: about Information and libraries”, on the challenges that libraries have to face in an increasingly interactive media world. In addition to papers, workshops and meetings, Thursday was the day of the Evening Party! Great music and dance floor at the Kubli Club.

On the last day it was the turn of Josie Billington, from the University of Liverpool, who showed us the power of Shared Reading to influence mental health and wellbeing. During the closing ceremony, the BOBCATSSS 2017 organizers handed the flag over to the BOBCATSSS 2018 organizers, the University of Latvia and the Eötvös University of Budapest, Hungary. So, see you next year in Riga, Latvia!

Enjoy it!

Andreu Sulé

University of Barcelona

The tsunami is coming to the ICA!

Hello,

It seems that the tsunami of rethinking descriptive principles is also reaching the world of archives, or at least this is the signal that the International Council on Archives (ICA) is sending with its ongoing review of the current principles and standards of archival description.

Indeed, in 2012 the ICA created an Expert Group on Archival Description (EGAD) with the aim of developing a comprehensive descriptive standard that “reconciles, integrates, and builds on” the four existing standards: ISAD(G), ISAAR(CPF), ISDF and ISDIAH.

After four years of work, the Expert Group released for public comment the initial draft of the first part of a two-part standard for archival description named Records in Contexts (RiC). The first part contains the conceptual model (RiC-CM), while the second will contain the ontology (RiC-O), in a clear orientation towards the Semantic Web.

Example of Archival Description Conforming to RiC-CM

The RiC-CM draft identifies and defines the primary descriptive entities (fourteen in all), the properties or attributes of these entities, and the essential relations among them. Surprisingly, the standard does not include any E-R representation of the proposed conceptual model, and we have to content ourselves with the representation of an archival description example (p. 93).

As you can see, RiC-CM moves far away from the classic hierarchical model of ISAD(G) (p. 36) and proposes a description in the form of a graph or network. Thus, the new proposal seeks to respond not only to the archival community but also to the records management community, whose specific needs often do not fit the archival principles of respect des fonds and respect for original order.
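To make the contrast concrete, here is a small sketch in Python (entity identifiers and relation names are illustrative, not taken from the RiC-CM draft): in a graph-style description a single record can carry several relations at once, instead of having only one parent in a tree.

```python
# A toy graph-based archival description: each statement is a
# (subject, relation, object) triple, so records, agents and
# activities are all nodes linked by labelled edges.
description = [
    ("letter-042", "isPartOf", "correspondence-series"),
    ("correspondence-series", "isPartOf", "smith-fonds"),
    ("letter-042", "hasAuthor", "jane-smith"),      # an agent
    ("letter-042", "documents", "land-purchase"),   # an activity
    ("jane-smith", "participatedIn", "land-purchase"),
]

def related(entity, triples):
    """Return every entity directly linked to `entity`, in either direction."""
    out = set()
    for s, r, o in triples:
        if s == entity:
            out.add((r, o))
        if o == entity:
            out.add((r, s))
    return out

# Unlike in a strict hierarchy, "letter-042" has several relations at once:
# part of a series, authored by an agent, and documenting an activity.
links = related("letter-042", description)
```

In a pure ISAD(G)-style hierarchy only the `isPartOf` edges would exist; the other edges are exactly what the graph model adds.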

One more time... enjoy it!

Andreu Sulé

University of Barcelona

 

Is RDA really necessary?

Hello,

“The rules called Resource Description & Access (RDA) are an expensive answer to a non-existent problem.” These are not my words but those of Michael Gorman, the reputed British-born librarian, in his latest article, RDA: the Emperor’s New Code, published in JLIS.it (Vol. 7, No. 2, 2016).

Gorman’s animosity towards RDA has been well known ever since its first drafts. In RDA: the coming cataloguing debacle, Gorman already declared himself “horrified” by them.

One might guess that this hostility has something to do with his strong links to the Anglo-American Cataloguing Rules, the cataloguing code that RDA is called to replace (Gorman was the first editor of the 1978 and 1988 editions of AACR2), but if we read RDA: the Emperor’s New Code carefully we find serious, well-argued points.

His central idea is that AACR2 is a cataloguing code that “could accommodate useful change; and was perfectly adequate to the realities of modern cataloguing”. To demonstrate this, Gorman refers to the MRIs: AACR2 Rule Interpretations, a draft unofficial AACR2 that incorporates all the changes introduced in RDA.

Gorman then illustrates the senselessness of changing the cataloguing code with three examples: the “inexplicable and irresponsible” abandonment of ISBD, “the most successful bibliographic standard in history”; the right decision to abandon abbreviations in catalogue records, which however “could easily have been accommodated within AACR2”; and, lastly, the fact that RDA “makes both cataloguing and catalogue use more confusing” because of the amount of “errors, confusions, misleading examples, and unclear wordings” that the code contains.

All in all, you may or may not agree with Gorman’s arguments, but it is difficult to remain indifferent to his opinions.

Enjoy it!

Andreu Sulé

University of Barcelona

Linked Data for Libraries (LD4L)

Hello,

Linked Data for Libraries (LD4L) is a collaborative project of the Cornell University Library, the Harvard Library Innovation Lab, and the Stanford University Libraries that began in 2014 with a two-year, $1 million grant from the Andrew W. Mellon Foundation and has concluded its first research phase this year.

The news is that the Andrew W. Mellon Foundation has renewed this grant with $1.5 million (2016–2018) and has extended it to two other projects: the Linked Data for Libraries Labs (LD4L Labs), headed by Cornell, and the Linked Data for Production (LD4P), headed by Stanford. An LD4L Gateway now brings the three projects together.

The goal of the original LD4L project was to create a Scholarly Resource Semantic Information Store (SRSIS) model based on BIBFRAME, in order to link bibliographic data (converted from MARC21), person data, and usage data, and to connect library resources with institutional and other data on the web.
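As a toy illustration of the underlying idea (a Python sketch, not LD4L’s actual SRSIS code; the field tags and the namespace are assumptions), a flat MARC-style record can be re-expressed as triples so that works and persons become linkable nodes rather than plain strings:

```python
# A hypothetical flat record using MARC-like tags (245$a = title, 100$a = author).
marc_like = {"245a": "Semantic Web for the Working Ontologist",
             "100a": "Allemang, Dean"}

BASE = "http://example.org/"   # hypothetical namespace, not a real LD4L URI

def to_triples(record_id, record):
    """Re-express a flat record as (subject, predicate, object) triples."""
    work = BASE + "work/" + record_id
    triples = []
    if "245a" in record:
        # The title stays a literal value.
        triples.append((work, BASE + "title", record["245a"]))
    if "100a" in record:
        # The author becomes a URI, i.e. a node other data can link to.
        person = BASE + "person/" + record["100a"].replace(", ", "-").lower()
        triples.append((work, BASE + "creator", person))
    return triples

triples = to_triples("123", marc_like)
```

The key design point is the `creator` value: once it is a URI instead of a string, person data and usage data can attach to the same node, which is what linking bibliographic data across stores requires.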

For its part, the LD4L Labs project is focused on developing and supporting “tools for linked data creation and editing, the bulk conversion of existing metadata to linked data, and a common system to support initial work in entity resolution and reconciliation”.

Finally, the goal of LD4P is to begin the transition of technical services production workflows to ones based on Linked Open Data (LOD). This first phase of the transition will focus on developing the ability to produce metadata as LOD communally, extending the BIBFRAME ontology to encompass the many resource formats that libraries must process, and engaging the broader library community to ensure a sustainable and extensible environment.

As you can see, three leading projects in the library Linked Data arena!

Enjoy it!

Andreu Sulé

University of Barcelona

Finally, library catalogs on the Web!(?)

Hello,

In June 2014, at the American Library Association Conference in Las Vegas, Zepheira announced the Libhub Initiative, a project that aims to raise the visibility of libraries on the Web by “actively exploring” BIBFRAME and Linked Data. Its strategy is based on automatically exporting library MARC21 records, converting them into BIBFRAME, transforming them into Linked Data, and finally publishing the transformed and connected content on the Web.

Nowadays, the Libhub Initiative is in an experimental phase: hearing from the broader community, gauging interest and willingness to get involved, and recruiting Active Supporters, Interested Participating Libraries, Partners, and Sponsors. During this experimental phase there are no costs to libraries, because the main objective is to create a very large collaborative database (a cloud service) that allows library data to be discoverable and presented at or near the top of Web search engine result pages.

It is interesting to know that Eric Miller, Zepheira’s President, led the Semantic Web Initiative for the World Wide Web Consortium (W3C) at MIT prior to founding Zepheira, and that in 2011 the Library of Congress contracted with Zepheira to define the way forward for moving library data onto the Web. In fact, for the past two years Zepheira has been helping to define BIBFRAME.

Another interesting piece of information for evaluating the importance of this project is that its sponsors and partners include EBSCO, Innovative, SirsiDynix, the Deutsche Nationalbibliothek and the Denver Public Library.

Without a doubt, the Libhub Initiative is a project that deserves our special attention.

Enjoy it!

Andreu Sulé

University of Barcelona

Implementation of the RDF data model in digital collections of Spanish libraries, archives and museums

Hello,

We are very proud to announce the publication in the Revista española de Documentación Científica [Spanish Journal of Scientific Documentation] of our latest paper, Aplicación del modelo de datos RDF en las colecciones digitales de bibliotecas, archivos y museos de España [Implementation of the RDF data model in digital collections of Spanish libraries, archives and museums].

The article discusses how and to what extent the RDF data model is applied in major Spanish digital collections of heritage materials. With this objective, we analysed fifty-one digital repositories to determine whether they expressed their records in RDF, offered SPARQL endpoints queryable by external agents, and used references (URIs) as property values.

Our main conclusion is that the use of RDF is uneven and excessively conditioned by the use of applications that automatically convert records into RDF triples. In fact, few of the collections analysed offer SPARQL endpoints for external queries.

Another finding is the very scarce use of links connecting the data of Spanish repositories with other datasets. Accordingly, our recommendation is that these collections should enrich their data and define aggregation levels for the generated RDF data so that it can be disseminated, made accessible, and adapted to the Semantic Web.
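The distinction at stake can be shown with a minimal Python sketch (the records, property name and VIAF URI below are hypothetical placeholders): a property whose value is a plain literal keeps the record isolated, while a URI reference links it to an external dataset.

```python
# Hypothetical records: the same property with two kinds of value.
record_with_literal = {"dc:creator": "Cervantes Saavedra, Miguel de"}   # plain string
record_with_reference = {"dc:creator": "http://viaf.org/viaf/0000000"}  # hypothetical VIAF URI

def is_reference(value):
    """Very rough test: URI-valued properties are what enable linked data."""
    return value.startswith(("http://", "https://"))
```

A string value can only be displayed; a URI value lets an aggregator follow the link and merge the record’s data with whatever the external dataset says about the same entity.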

In fact, we are currently researching ways to semi-automatically enrich the data of DSpace repositories with external links to other datasets (VIAF, DBpedia, GeoNames, etc.). We’ll keep you updated.

Meanwhile… Enjoy it!

Andreu Sulé

University of Barcelona

Visual Genome or how computers can recognize what happens in an image

Hello,

The automatic representation of images is one of the greatest challenges of computer and classification science. Can computers recognize not just objects, but also make sense of what is actually going on in images?

The ability to automatically recognize the contents of images belongs to a larger field called Computer Vision, and deep learning is a method by which machines can learn to analyse and classify images. This branch of Artificial Intelligence (AI) is based on a “set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations”.
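As a purely numeric sketch of the quoted definition (toy weights, nothing to do with Visual Genome itself), here is data passing through two stacked processing layers, each one a weighted combination followed by a non-linear transformation:

```python
import math

def layer(inputs, weights, bias):
    """One processing layer: weighted sum followed by a non-linearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid: the non-linear step

# Two stacked layers: the output of one becomes the input of the next,
# which is what "multiple processing layers" means in the quote.
x = [0.5, -1.2, 3.0]                    # toy input data
h = layer(x, [0.4, 0.1, -0.2], 0.05)    # first-level abstraction
y = layer([h], [1.5], -0.3)             # higher-level abstraction
```

Real deep learning systems stack many such layers with millions of learned weights, but the principle is the same: each layer re-describes its input at a higher level of abstraction.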

It is within this research framework that we have to understand the Visual Genome project. Visual Genome is a dataset of 108,077 images developed by Fei-Fei Li, a professor who specializes in computer vision and who directs the Stanford Artificial Intelligence Lab, together with several colleagues.

Visual Genome, like other projects (e.g. Microsoft Common Objects in Context), tries to describe in a human-like way what happens in an image. In Fei-Fei Li’s words: “You’re sitting in an office, but what’s the layout, who’s the person, what is he doing, what are the objects around, what event is happening?”

The opportunities opened up by this research are enormous, from self-driving cars that properly understand (not just see) what happens around them, to robots that can interact with humans in better ways.

Enjoy it!

Andreu Sulé

University of Barcelona

The European Data Portal

The European Commission has developed and published the beta release of the European Data Portal, a portal that harvests the Open Data of Public Sector Information available on public data portals across European countries. Information regarding the provision of data and the benefits of re-using data is also included. The strategic objective of the European Data Portal is to improve accessibility and increase the value of Open Data.

In this project, Open Data refers to information collected, produced or paid for by public bodies and made freely available for re-use for any purpose. It is important to note that not all public sector information is Open Data.

The harvested public data portals can be national, regional, local or domain-specific. They cover the 28 EU Member States, the EEA, countries involved in the EU’s neighbourhood policy, and Switzerland.

Open Data can be searched by category, via a single search box, or with SPARQL queries. Users can browse through 13 categories that follow a revision of the DCAT Application Profile and have been mapped against the EuroVoc Thesaurus.
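As a hedged sketch of the third option (the endpoint URL and result format below are assumptions; check the portal’s own documentation for the real values), a SPARQL query for DCAT datasets could be sent as an ordinary HTTP request:

```python
from urllib.parse import urlencode

ENDPOINT = "https://data.europa.eu/sparql"   # assumed endpoint URL

# Ask for the first ten datasets and their titles, using the DCAT vocabulary.
query = """
SELECT ?dataset ?title WHERE {
  ?dataset a <http://www.w3.org/ns/dcat#Dataset> ;
           <http://purl.org/dc/terms/title> ?title .
} LIMIT 10
"""

# We only build the request URL here (no network call), which is the
# standard SPARQL-protocol form: endpoint + URL-encoded query parameter.
request_url = ENDPOINT + "?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"}
)
```

Fetching `request_url` with any HTTP client would then return the matching datasets as JSON, ready to be processed by an external agent.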

Results can be filtered by location, providers (catalogs), categories, tags, formats and licenses.

For each result, users can see the dataset name, a brief description, the available distribution formats, descriptive tags, and additional information that includes, among other things, the creation date, the last update and the publisher website. The same interface links to provider information and information about the license.

To sum up, the European Data Portal is a tool for searching and finding Open (Government) Data, very well organized and very useful for the whole data value chain: from data publishing to data re-use.

Enjoy it!

Andreu Sulé

University of Barcelona

OCLC breaks down the library walls again

Hello,

OCLC has announced an agreement with more than 200 publishers and content providers around the world to add metadata for books, e-books, journals, databases, etc., in order to facilitate their discovery and user access through WorldCat Discovery Services.

It is estimated that this agreement will provide users of OCLC services with descriptive metadata for more than 1.9 billion resources, in both physical and electronic formats.

With this agreement, OCLC continues to break down the barriers of the traditional library setting. In addition to including in bibliographic records links to external resources (online bookstores, review websites such as Goodreads, etc.) and publishing metadata in Schema.org, OCLC will offer a new service that will surely be highly appreciated by its users.

Enjoy it!

Andreu Sulé

University of Barcelona

How to find content of apps with Google Search

Hello,

It is well known that Google has been indexing the content of apps for two years now, so that when someone searches from a mobile device they get results whether these live in an app or on the web. Google states that it has over 100 billion deep links into apps in its index.

But this service had two limitations: first, Google Search results only showed content from apps that had an equivalent on the web; secondly, each user could only find content from apps installed on their own smartphone.

"Stream" button

From now on, Google Search improves its service: every time users search through the Google app, or directly in the Chrome and Android browsers, they will find content that lives only in apps, even from apps they have not installed yet! This last feature is obtained by clicking the "Stream" button. For example, if the results include content from a hotel’s app, clicking the "Stream" button lets the user view a streamed version of the app and even use its services (for example, booking a room).

In this first beta of the service, Google is providing only content from the apps of a small group of partners: Hotel Tonight, Chimani Rocky Mountain, Daily Horoscope and New York MTA Subway Map.

Enjoy it!

Andreu Sulé

University of Barcelona
