Friday, December 23, 2011

Amazon EC2 reserved instance - one year later

My first Amazon Web Services EC2 reserved instance, which supports the Naval Reactors History Database online resource, expired today. This is something of a milestone for me. I had used EC2 instances for several months prior to purchasing a reserved instance last December. This reserved instance enabled me to run an EC2 instance 24/7/365 at a lower cost than I would pay for an on-demand AWS EC2 instance.

Based upon my year's experience, I have these thoughts:
  • First, my goal is to maintain the nrhdb resource online at the lowest possible cost. While it would be possible (though technically clunkier) for me to run the nrhdb on a Windows server, the EC2 prices for Linux versus Windows make it clear that the purchase and use of a Linux EC2 reserved instance is the most cost-effective choice.

  • Second, the Amazon Web Services outage that occurred in April made me take a second look at the way that I manage my instance's data. I posted on this incident earlier this year. I now maintain a copy of my server's volume in another availability zone in the East region at all times and update the snapshot/volume every week or so. Although I could use EC2 API tools to automate this process, I'm still doing it manually using the AWS Management Console.

  • I wound up changing my server OS during the year from Fedora to Amazon Linux (using the Amazon Linux AMI). Overall, my experience with this AMI has been positive - the Amazon Linux instance comes with fewer preinstalled packages and the ongoing installation of updates is seamless.

  • Finally, EC2 reserved instances are not only specific to an AWS region, but to an availability zone as well. Thus, you're locked into that AZ for the term of the reservation, which can be a problem if, for example, there's an incident like the one that impacted the East region but was centered in one availability zone.
Overall, I'm glad that I'm using Amazon Web Services EC2 and I believe that I'm getting a good value from AWS as a whole. The EC2 home page notes that EC2 "is designed to make web-scale computing easier for developers" and that is certainly my take on it. I've been able to maintain an online service reliably for a year and the purchase of the reserved instance has made it affordable.

Sunday, September 18, 2011

public AMI that enables rapid evaluation of XTF

At the upcoming LITA National Forum, I will be presenting on my work in blending the California Digital Library's XTF platform and EC2 cloud services. I'm using XTF and Amazon Web Services in support of the Naval Reactors History Database service, an online resource that I started building last year.

As part of my presentation, I've created a public Amazon Machine Image (AMI), ami-51f93b38 (or, just search the AMI catalog for 'xtf'). This is a US East region AMI. I created this image using the Amazon Linux 32-bit AMI as a base, then downloaded and configured the XTF 3.0 release, along with the XTF sample files that show XTF's use with a range of formats, including EAD, TEI, PDF, and HTML.

There is a README file in the ec2-user directory with more information on how to use the instance to test XTF. This info is also available at URL https://s3.amazonaws.com/ami-xtf/xtfAmiInformation.rtf.
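
For anyone who prefers the command line to the AWS Management Console, launching the image with the EC2 API tools would look roughly like the sketch below. The key pair and security group names are placeholders, and I'm assuming Tomcat serves XTF on its default port (8080) - the README has the actual details:

# launch a small instance from the public XTF AMI in the US East region
ec2-run-instances ami-51f93b38 -t m1.small -k my-keypair -g my-security-group --region us-east-1
# once it's running, look up the instance's public DNS name
ec2-describe-instances --region us-east-1
# then point a browser at http://<public DNS>:8080/xtf/search (port and path per the README)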

In summary, this AMI will enable an institution or individual to quickly get XTF online and to review its features, starting from the release version of XTF and the samples that the CDL has made available.

Additional note: This is the first time that I've worked with the Amazon Linux AMI and I found it to be both easy to use and intuitive. I intend to use this OS option in the future.

Saturday, September 10, 2011

more on community / Naval Reactors History Database


I've continued with efforts to build community around the Naval Reactors History Database. As I think about this project, it has three major components: data, infrastructure, and community. I focused heavily on the infrastructure piece in the last nine months. I summarized my work with XTF and Amazon Web Services EC2 in a Code4Lib Northwest presentation this spring. But, in summary, the open source XTF digital content platform and AWS EC2 have enabled me to create a durable online presence at a low cost.

I've focused a lot on the community component in the last two months, with some positive results. This weekend, I've gotten Facebook commenting online, which was more challenging than I expected. The comments box appears in the footer section, along with the Facebook Like button. It was only in late June that I even thought about adding the Like button to the site, after hearing Eric Hellman's presentation at ALA Annual. I added the Like button to the site in late July and I've already been able to use it, along with some targeted display ads, to drive traffic to the NRHDB site and to learn more about the resource's users. I'm hopeful that the commenting will add another important dimension - enabling a public user dialogue within the site.

As I've worked to build community, it's become clear to me how important it is, and how difficult it is. I'm still thinking this through, but I do want to seriously engage other users in the Naval Reactors History Database. This may require me to modify the XTF interface to enable end users to add comments relevant to specific database objects - images and documents. To start, there will be a unified comment stream for the site as a whole.

Also, I switched from the default Twitter widget to a Twitter widget created through the vendor WidgetBox, for one narrow reason: the ability to better customize the widget's look-and-feel. I am not a designer and, for that reason, I am using the XTF 3.0 look-and-feel with minimal customizations. The WidgetBox Twitter widget supports squared corners, which are part of the XTF default interface.

The third component, data, is going well; I will comment on it in more detail in a separate post.

Friday, July 22, 2011

starting point - to build NRHDB community


This month, I've made some changes to the Naval Reactors History Database, hoping to build a foundation for linking with the small community of users who have committed time to creating Wikipedia entries and other info in social network spaces:
  • An NRHDB Twitter account, in which I post database-related info; Twitter users can indicate interest in the database by following this user
It will take some time for these changes to have an impact. The Facebook development work really opens up a lot of options for communication - as Eric Hellman of Gluejar pointed out in his ALA Annual presentation, the Like button is the most popular Semantic Web application. Its use establishes a number of methods for me to communicate with other Facebook users who have interest in the NRHDB, including messages and even ads.

Monday, June 20, 2011

integrating text into the Naval Reactors History Database

With my XTF presentation at Code4Lib Northwest completed, I've begun to do some more significant modifications of the XTF instance that supports the Naval Reactors History Database service. One need was prompted by the inclusion of text content, in the form of documents that describe NR's work in Project Prometheus.

Previously, the database was composed solely of image files, many of which contain internal text metadata that is indexed by XTF. Now, I'm adding textual content, in PDF format, to the index. This change introduced a problem: in a displayed record, the Matches field displays text snippets from both the image metadata and the text files. These two types (image and text) need to be differentiated so that the display is more comprehensible.

Solution: Modify the file resultFormatter.xsl so that the Matches display is contextually customized. Two xsl:if elements are added, with the differentiation based upon the data in the XML metadata's Dublin Core Type field.
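
A rough sketch of the approach - the element paths, Type values, and labels below are stand-ins for whatever the metadata and stylesheet actually use:

<!-- snippets drawn from image metadata -->
<xsl:if test="meta/type = 'image'">
  <span class="label">Image metadata matches: </span>
  <xsl:apply-templates select="snippet"/>
</xsl:if>
<!-- snippets drawn from PDF/text content -->
<xsl:if test="meta/type = 'text'">
  <span class="label">Document text matches: </span>
  <xsl:apply-templates select="snippet"/>
</xsl:if>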

I'm continuing to look at XTF programming possibilities, including those described in Rowan Brownlee's XTF guide.

Also, I have to say that in coding this change, I took the NRHDB site down for several seconds or even minutes. For that reason, I'll be testing changes in a test instance, running parallel to the production site, in order to eliminate downtime. I can do this using EC2 micro instances that I terminate upon completion of testing. It's at this point that the open source/cloud blend is most advantageous - instead of licensing commercial digital collections software to support production and test and running both servers locally, I can run the production instance 24/7/365 using an Amazon Web Services EC2 Reserved Instance and spin up an EC2 micro instance on demand for several hours to customize and extend XTF as needed. And since XTF and its supporting components are all open source, there are no software costs for this work.

Wednesday, June 15, 2011

gaaa....Powell Technical Books is closed


In yet another sign of the times, Powell's Technical Bookstore is now closed. This happened last fall, I believe, but I just came across it when attending Code4Lib Northwest in Portland earlier this week.

There is a much-smaller (relative to the previous Technical Books) Powell's 2 location, with computer and science books. This store is on the same block as the large Powell's store.

I'm learning to love reading on my Kindle, but I'll miss the bricks-and-mortar stores, no doubt.

Tuesday, June 7, 2011

customizing stop words in XTF

I've been immersing myself in the inner workings of the California Digital Library's XTF platform. I expect to make a number of changes to my XTF-based Naval Reactors History Database service in the next few months, in preparation for a fall LITA Forum presentation. The change described in this post is actually pretty trivial - adding a customized stop words list for an XTF instance - but it illustrates the kind of back-end customizations that are possible.

I decided to use the stop words list provided on the SEO Tools website. To employ the list in XTF, I copied the file to the xtf/conf/stopwords directory, replacing the existing stopwords.txt file that was included in the release version of XTF with the one that I obtained from the SEO Tools site.

I then stopped Apache Tomcat and rebuilt the XTF index. A clean build is recommended, as described in this XTF users group post. (I received the error described in the message before resorting to a clean build.) Upon restarting Tomcat, the new stop words list is in use.
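
Put together, the sequence from the command line looks something like this - the paths are placeholders for wherever XTF and Tomcat live on your server, and I'm assuming the index is named 'default', as it is in the XTF samples:

# replace the existing stopwords.txt with the customized list (path per your install)
cp ~/stopwords.txt /path/to/xtf/conf/stopwords.txt
# stop Tomcat
/path/to/tomcat/bin/shutdown.sh
# clean rebuild of the XTF index
cd /path/to/xtf
bin/textIndexer -clean -index default
# restart Tomcat; the new stop words list is now in use
/path/to/tomcat/bin/startup.sh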

Tuesday, May 3, 2011

using Amazon Web Services features to improve EC2 and EBS resource durability

I run the Naval Reactors History Database, a hobby project, on Amazon Web Services resources. This includes a Linux server, persistent storage, and an IP address. I run them out of the AWS East region, which was the region at the center of the recent and significant AWS outage. While my online resource never had an outage, to the best of my knowledge, it's clear that it could have been affected, since the EBS control plane supports availability zones across the entire region. Also, Amazon announced that a small amount of EBS volume data had been lost in the affected Availability Zone.

So, this weekend I spent some time thinking about preserving the work that I've done with my Naval Reactors project. This is in the context of an online database that I'm slowly building, with objects being added and updated as I find the time on weekends and evenings. In short, it's a fairly static resource. In his book on Amazon Web Services, Jeff Barr notes the importance of creating lists. That's what I hope to get out of my own work - a set of lists that I create and can use in order to recover from AWS outages like the one that occurred last month.

So, to begin. First scenario: I am running an m1.small Linux instance in the us-east-1d Availability Zone (AZ). It's possible for me to launch and test a copy of my current server in another East availability zone. All of this work is done in the Amazon Web Services console, so it's quite quick and easy:

1. From Instances: Create an AMI from the running instance (what I'll call the production instance). There is a short period (roughly 1-2 minutes) of server downtime as the AMI is generated.

2. From AMIs: Choose to launch an instance from the newly-created AMI. When going through the creation steps, I change the default selection for the AZ and choose to run the new instance in us-east-1b. I choose to keep the same Key Pair Name and Security Group as I have for the production instance.

After launching the instance, I have an EBS-backed Linux instance running in us-east-1d (production) and an instance running in us-east-1b (backup). The AZs have independent power and network connectivity. While the incident report describes how problems in one AZ can potentially impact others in the region, having this server running in another AZ provides a resource backup and a method for bringing my online database back online in the event of an outage.

3. Using the public DNS address, I test access to the Tomcat-based Naval Reactors History Database on the backup server - with success.

Note: I didn't generate an Elastic IP address for this instance. First, it wouldn't make sense in the context of my use - I would use the Elastic IP address currently mapped to the production server and would map this address to the backup server in the event of an outage. But second, you should be aware that you will be charged for an unused Elastic IP address that you've allocated to your account.

4. Stop the EBS-backed backup server instance.

Result: production server running in AZ us-east-1d; backup server stopped, but ready to start and serve resources, in AZ us-east-1b.

I performed all of these steps successfully today.
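
For the record, the same first-scenario steps could also be scripted with the EC2 API tools - something like the following sketch, where the instance and image IDs, key pair, and security group are placeholders (I used the console for everything above):

# step 1: create an AMI from the running production instance
ec2-create-image i-aaaaaaaa -n "nrhdb-backup-image"
# step 2: launch a backup instance from that AMI in a different Availability Zone
ec2-run-instances ami-bbbbbbbb -t m1.small -z us-east-1b -k my-keypair -g my-security-group
# step 4: stop the backup instance until it's needed
ec2-stop-instances i-cccccccc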

---

This is one method of providing redundancy. But I want to come up with something a little more sophisticated - in part because I am interested in moving to a new server OS and a more robust EC2 platform in the future. Here's a second scenario, in which I build a new instance to host the collection and attach a volume with the needed data to it - all in a different AZ than the one the production server is running in.

Steps 1-5 and 8-9 below are performed in the AWS Management Console - including step 5, which I'll comment on later.

1. From Volumes: Create a snapshot from the production server's EBS volume.

2. From Snapshots: Create a volume from the EBS snapshot. Again, since the production server is running in us-east-1d, I create the new volume in us-east-1b.

3. From AMIs: Find the right AMI for the future production server. In my case, I'm looking for a Linux OS AMI that I'm comfortable with, preferably with Apache Tomcat preloaded.

4. From AMIs: Launch an instance using the AMI found in step 3. I'll be mounting the volume created in step 2, so I will manually set the instance's AZ to us-east-1b.

5. From Volumes: Here, I will attach the volume created in step 2 to the instance launched in step 4.

6. In the new server's Linux OS, create the mount point location and mount the attached volume:

# create the mount point and mount the attached EBS volume
mkdir /mnt/prodata
mount /dev/sdf /mnt/prodata   # on newer kernels the device may appear as /dev/xvdf

7. Copy the XTF and Naval Reactors History Database files from the just-mounted volume to the Tomcat location on the new production server.

8. After testing, use this server as the new production server and map the Elastic IP address to the server.

9. Stop the previous production server, and terminate it when comfortable doing so.

I had initially planned to perform step 5 using the EC2 command line tools, but it's vastly easier to use the AWS Management Console.

---

My conclusions: The second method provides an important foundation for ensuring the durability of my EC2-hosted online collection. Amazon's detailed report on last month's outage includes this statement: "For example, when running inside a Region, users have the ability to take EBS snapshots which can be restored in any Availability Zone...."

I'm still exploring how to best automate the process of creating snapshots and restoring a volume in an AZ that's different from the one the production server is running in. What I have as a primary protection at this point, from the second procedure: my production service running in one AZ, and an EBS volume containing my application and online collection data restored and available for use in a second AZ.
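
As a starting point for that automation, the snapshot/restore/attach steps (1, 2, and 5 in the second procedure) might be scripted with the EC2 API tools along these lines. The IDs are placeholders, and a real script would need to wait for the snapshot and volume to reach the 'completed'/'available' states between steps:

# step 1: snapshot the production volume
ec2-create-snapshot vol-11111111 -d "nrhdb production data"
# step 2: restore the snapshot as a new volume in a different Availability Zone
ec2-create-volume --snapshot snap-22222222 -z us-east-1b
# step 5: attach the restored volume to the new instance as /dev/sdf
ec2-attach-volume vol-33333333 -i i-44444444 -d /dev/sdf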

Also, I plan to do more reading on AWS best practices. I'm sure that I can improve upon the above procedures, but this is what I came up with based upon my current knowledge.

Tuesday, April 19, 2011

last day as Northwest Digital Archives database manager

Today, the Northwest Digital Archives infrastructure was transferred from Washington State University to its new home at the Orbis Cascade Alliance. I've learned a lot supporting the NWDA since 2003 - it has shaped my work, from the management of digital resources to my recently-started hobby project, the Naval Reactors History Database.

Right now, I'm reading an article, in the current Library Resources & Technical Services, that helps to illustrate the important role that archivists can play in meeting the preservation demands that research libraries currently face - including the preservation of research materials in digital format. The article is authored by a team that includes archivists and a librarian, and the literature review cites work that I've become familiar with through the NWDA program (such as the Trusted Digital Repository Checklist, or TRAC).

It was a fun, but challenging experience; I'm sure the NWDA program will prosper in the years ahead.

Sunday, April 10, 2011

worldcat local usability results from OCLC

I've had the chance to read the most-recent usability results for WorldCat Local. I found this document extremely helpful, particularly since this round draws upon usability testing performed on 40 faculty, student, and library staff users at five academic institutions. It's completely logical for OCLC to perform this testing at the network level; in today's tight fiscal environment for libraries, this work is more important than ever.

Washington State University's WorldCat Local service is constantly evolving. A good example of this is enabling access to electronic resources. On this topic, the report notes that users prefer hyperlinks to an OpenURL button on the WCL detailed record display. This is problematic, because in some cases, the links provided in the OpenURL resolver are more accurate than hyperlinks that appear on the detailed record display. A good example of this at Washington State is the PsycArticles database records: the Findit! button presents a link that provides reliable one-click access to the full-text article. The links on the detailed record display, on the other hand, take the user to the journal index page for the CSA and EBSCO databases. This finding underscores the importance of my institution's current work in populating the WorldCat Knowledge Base and using its links over those scraped from the legacy Millennium library system. The study shows that users did feel comfortable in clicking on the WCKB links for electronic resources (which include the words "Full-text" and info on the provider and database).

The concept of "local" receives a lot of attention in the report and it's especially important at the WSU Libraries, where one WorldCat Local instance supports multiple campuses (although the Vancouver campus has built a campus-specific WCL instance, other WSU campuses have not). Testing clearly revealed that users tend to read "local" as the local campus and, in this context, the WCL first-level holdings presentation can be confusing. This would be ameliorated by having a WCL instance for each campus and, as a unified discovery and fulfillment system, this approach makes sense.

There is a discussion on the editions support in WorldCat Local, which is still problematic. The report describes an OCLC goal for presenting editions: the most recent, locally-held edition should be displayed in the detailed record by default.

The report also describes some label changes that have been made in the last year, on several displays, and the fact that format facet support is now available for the "all editions and formats" display. I found it useful to review these, particularly in showing the ongoing evolution of the WorldCat Local service and in understanding the motivation behind each change.

There are two areas that aren't covered in this report, and which are serious issues in terms of WorldCat Local's acceptance at research and academic institutions. The first is duplicate records. This is becoming more problematic as more institutions, like the University of Washington and Washington State University, add third-party databases to their WorldCat Local default search. This leads to more duplicate records in search results. It's my understanding that OCLC is working on a de-duplication strategy that addresses duplicates at time of record loading. I see duplicate records as a manageable problem, though, because even modern search services like Google present duplicate entries to users.

The second area is more difficult - the lack of any hits-in-context support. Not only does this impede buy-in to WorldCat Local, but this lack of functionality makes WorldCat Local an outlier relative to other modern search services - which are presenting information on why a record was retrieved. I'm not certain that the seriousness of the hits-in-context problem is clear to the WorldCat Local discovery team. In contrast, the report notes that known-item searching performance is a problem that OCLC is working to address.

Despite these two omissions, this report is very valuable to my institution as the WSU Libraries moves forward to enhance search and fulfillment services. Assessment is another discovery/fulfillment-related task that can, to a significant degree, be moved to the network level. This is shown by the last two summary reports created and released by OCLC.

Monday, April 4, 2011

data curation profiles workshop

This morning, I'm attending a workshop on data curation profiles at the University of Washington. Based upon my reading, and prior to the session's start, I can say that this is a little out of my comfort zone. This site provides a good launching point for understanding the purpose of, and past work with, data curation profiles.

Saturday, March 12, 2011

a great book for the fundamentals of Amazon Web Services...

I've been reading Jeff Barr's book on Amazon Web Services (Kindle edition) - using a Droid phone and a Kindle device. My take thus far: this is a great resource for getting a grip on the fundamentals of Amazon Web Services. I had worked extensively with some AWS services (S3, SimpleDB, and EC2) and applications like SDB Explorer and Bolso prior to reading the book, but I can still say that I've learned quite a bit from it.

The code for the programming examples is in PHP. Making the examples work on your system requires creativity at some points. The author states his technical expectations early in the book and it's expected that the reader has a base proficiency with PHP, along with some system administrator skills. I found it fastest to download and use the CloudFusion code on GitHub; I'm running the examples on a Fedora 8 EC2 instance.

Barr uses the "programmable data center" concept to explain cloud computing, and when you get to chapter 5 and run the EC2 API example in PHP, it's an empowering feeling. At that point, theory and practice come together. This script ran successfully (launching an EC2 instance, claiming and mapping an IP, then creating and attaching EBS volumes programmatically) the first time I ran it. Likewise, the modular application (using SQS) that scrapes web site images and creates composite pages gave me a sense of AWS' potential.

I have to say that I hit a wall just beyond the halfway point in this book, when it came to getting the AWS CloudWatch code functioning. At this point, to move forward with this section (CloudWatch, ELB...), I feel that I'll have to learn more about the CloudWatch API. In this case, I feel that the book and code sample explanations for working with the CloudWatch API are insufficient. (I got the command line tools for CloudWatch running, but not the PHP code example, particularly listing measures. I couldn't fix it by googling or by reading the AWS PHP SDK documentation.) In short, I had to skip most of the chapter and just go on.

There are chapters covering SimpleDB and RDS (the Relational Database Service). For SimpleDB: I found myself using the book examples as a launching point for building my own small SimpleDB applications. You can take Barr's code examples and the SimpleDB API documentation at URL http://docs.amazonwebservices.com/AmazonSimpleDB/2007-11-07/DeveloperGuide/ (API Reference/Operations) to programmatically work with SimpleDB as needed for your project. That is, you can use the object-oriented code examples and the Request and Response information to create the needed applications - which illustrates the programmable data center concept better than anything.

Overall, highly recommended. I found some challenges in working with the Kindle edition, in terms of viewing illustrations. I found it possible to run most, but not all, of the code examples in the book. And I'm much more proficient with and knowledgeable about AWS than when I started reading it.

Thursday, January 20, 2011

my thoughts on the LITA streaming incident

As many librarians know, there was an incident at the LITA board meeting at ALA Midwinter in San Diego, during which the board voted to shut down a live stream of the meeting. This article provides a good summary of the incident and references to some editorial pieces on it.

I did not have a strong feeling about this incident going into the LITA Town Meeting on ALA Monday morning. I did, however, ask questions about it at my large table, because I had, in the previous two days, read Twitter posts describing the incident. The answers that I heard, the discussion at the meeting (including info provided by several board members taking the floor and speaking) and the message that Karen Starr sent me (as a LITA member) yesterday all had the same effect on me: the longer they talked, the angrier I became with the LITA board's cutoff decision and with its attempts at moving beyond the incident.
  • The primary remedy described in the letter, the proposed creation of a content streaming task force, appears, from the charge, to be centered on programs - not meetings; it was a board meeting that's at issue in this incident. The "ancillary events" mentioned at the conclusion of the document also seem program-centered ("author/presenter chats").

  • There is no suggestion that LITA member input is needed - or even wanted - in determining the composition of the content streaming task force. It's implied that "the Board" will draw upon the talent available in LITA - but only as its current members choose.
Really, each LITA member must decide his or her own response. I trust that members will do what's best for themselves and do what they think is right. There are opportunities for service in LITA, in non-library technical associations, and in organizations like ACRL. I believe that the January 19 letter, apparently the product of careful reflection by the current LITA board, illustrates the limits of contributing through LITA more clearly than the streaming cutoff decision itself.

Also, I urge librarians who have not yet seen the tape of the board meeting and the cutoff decision to watch it: URL http://www.ustream.tv/recorded/11892303.

Wednesday, January 19, 2011

ala midwinter 2011

Attended the 2011 ALA Midwinter meeting in downtown San Diego.

I simultaneously attended the OCLC symposium on transformational literacy (with Dr. Mimi Ito as the primary speaker) and monitored the RMG President's Panel on Twitter. Dr. Ito asserted that we need to train students to be lifelong learners, so that they can adapt to jobs that haven't been created yet. It's both humbling (building services to educate students in order to prepare them for the jobs of the future) and frustrating, as voters and their elected officials have turned against public education. I happened to be reading Remix at the time I attended this session, and Ito described and showed examples of transforming works - "the genie is out of the bottle...the sharing and appropriation is going to continue."

And, as far as the RMG panel: The tweets were enlightening; here's the report of one attendee who was tweeting during the session; a number of other attendees were tweeting their thoughts as well, many of which reflected frustration with existing ILS vendors, whose reps were speaking at this event.

I attended a session on bX Recommender (Nettie Legace of Ex Libris; John McDonald of Claremont College). At the WSU Libs, we have licensed bX and experimented with displaying recommendations in the SFX context menu. One of the interesting possibilities described by Legace is using the bX API to create a widget that displays recommendations in other contexts - for example, current "hot" articles in a discipline, based upon click activity at the institutions contributing usage data to bX.

I presented at the OCLC Cooperative Platform presentation, along with OCLC's Robin Murray and Kathryn Harnish. First, I was impressed at the turnout - approximately 50 attendees for an initiative that's still in the pilot stage. During my remarks, I contrasted the black box, proprietary architecture of current ILS systems with the Cooperative Platform. I cited the III Millennium bursar's office functionality (which is licensed and employed at WSU) as an example, and compared it with the possibility of using local and community development to build the same functionality - and to do it in a way that enables ongoing improvement.

I attended an Ex Libris presentation (Susan Stearns, John Larson) on its next-generation system, Unified Resources Management (URM)/Alma. URM is the larger, services-based architecture; Alma is the cloud-based library management system. Alma does have significant parallels with the OCLC WMS initiative. One of the goals of Alma is to turn format-based vertical silos to service-based horizontal workflows. Also, the presenters showed bibliographic records being maintained in a community (group) context. Finally, the infrastructure for Alma is cloud-based, and I was intrigued that Ex Libris is using Amazon Web Services EC2 to support its early adopter testing of the Alma system (though the long-term hosting arrangements haven't been determined). The Alma development seems significantly behind OCLC WMS, based upon the description at this session, with general release expected in 2012.

I participated in the WorldCat Navigator User's Group meeting. At this meeting, Christa Starck of OCLC described the Navigator enhancements coming in 2011.

I presented at the OCLC Developers Luncheon on some local development efforts that Jon Scott and I have done related to autocompletion in WorldCat.org-based systems. My presentation slides are online in Research Exchange.

I attended the public session for OCLC Web-scale Management Services (WMS), which was led by Andrew Pace and included three presenters from early adopter institutions - Jason Griffey (UTC), Jackie Beach (CPC regional library system), Michael Dula (Pepperdine). All three of the presenters were positive about the current or eventual success of their WMS migrations. What's most intriguing to me, listening to Pace's remarks, are the possibilities offered by a workflow engine - enabling a management system built around more customizable workflows. I thought about this in preparing for the Cooperative Platform presentation and the relatively closed-architecture system (Millennium) that we employ at WSU for our management system. How much is our current organizational structure, the way we do our work every day, driven by the management system? What if we could tailor it to our needs, instead of setting up workflows to work with a more inflexible system (albeit one with rich staff-side functionality)?

Then, on Monday morning, I attended the LITA Town Meeting. There was a lengthy discussion of the decision to cut off the live stream and recording of a Saturday AM LITA board meeting. I'm of two minds on this - on one hand, I do believe that rules should be followed. There was info presented at the Town Meeting that made it clear that saving/distributing a recording of the meeting is not acceptable under ALA rules. The live streaming part, I'm less clear about. But on the other hand, I did get a sense from those who supported the stream cut-off that there's not intense interest, given the technical difficulties and possible financial repercussions (in enabling ALA participation without registration and attendance), in actually pushing a change to the rules related to streaming of meetings.

[Postscript: There was a LITA message/press release today, from Karen Starr, LITA President, that described the policy issues in greater detail. The text is at URL http://litablog.org/2011/01/lita-board-affirms-openness-and-transparency/. I do think that my interpretation of the meeting, described above, is accurate.]

All in all, it was a very good conference for me. It was the last conference with my BlackBerry, so I'm certain that the future exhibit floor photos won't be so blurred. I do believe that we are on the brink of generational change in library systems, as management systems shift from locally- to cloud-based; as libraries gain the ability to customize their own workflows and to manage all formats equally; and, as the pendulum in library automation work shifts from infrastructure maintenance to the creation of services. I am glad to see vendors like OCLC, Ex Libris, and Equinox embracing this vision, albeit with different approaches. And I'm confident that libraries will, for the most part, move their services to more forward-thinking vendors.

Friday, January 14, 2011

naming confusion...

Marshall Breeding posted an Ex Libris press release today, which describes an apparent name change for the vendor's next-generation library management solution, from URM to Alma. I am scheduled to attend a breakfast describing URM on Sunday morning, but the branding folks have other ideas....

Along the same lines, I've found OCLC's designation of WorldCat.org content, as the vendor simultaneously makes third-party content available through WorldCat Local, to be another confusion generator. It's more confusing than it sounds on the surface, and it's very difficult to obtain accurate and current information on the content (databases, approximate number of objects) that makes up WorldCat.org.
---
Postscript: Okay, I did attend this Ex Libris session in San Diego. It's now my understanding that URM is the broader technology initiative by Ex Libris, while Alma is the narrower, cloud-based management system piece.