Public Domain Day (January 1, 2015): what could have entered it in 2015 and what did get released

Every year, January 1st also marks works from around the world entering the public domain, thanks to the copyright laws in their respective countries.

The Public Domain Review put up a list of creators whose works are entering the public domain: http://publicdomainreview.org/collections/class-of-2015/ (Kandinsky! Whooh!)

The Center for the Study of the Public Domain put up a list of some quite well-known works that are still under extended copyright restrictions: http://web.law.duke.edu/cspd/publicdomainday/2015/pre-1976

John Mark Ockerbloom from the University of Pennsylvania pointed out that EEBO is now out and, among other things, promoted several alternatives: http://everybodyslibraries.com/2015/01/01/public-domain-day-2015-ending-our-own-enclosures/

The sound version of a Google (old) reCAPTCHA

Last month, Google announced the new no-CAPTCHA reCAPTCHA that is supposedly more accurate and better at preventing spam. We’ll see how this goes.

In the meantime, plenty of websites that employ Google’s reCAPTCHA still use the old version, like this:
google old recaptcha

The problem with this reCAPTCHA is that it fundamentally doesn’t work with screen readers (among other things, like forcing you to cross your eyes trying to figure out each character in the string). Some people pointed out that reCAPTCHA offers a sound version (see that little red speaker?) that should mitigate the problem.

Here’s the link to the sound version of a Google reCAPTCHA: https://cdn.10centuries.org/CQgzKt/6af0705ca97aa14b2d08ed3a2f58a0f8.mp3

This example was taken from the PubMed website and happened to be set as a string of numbers.

Enjoy!

p.s. What is this about PubMed using an inaccessible reCAPTCHA? There are other ways to employ non-CAPTCHA security techniques without using that kind of solution. :-/

p.p.s. In case you’re curious, I could not decipher two out of the eleven (if I counted correctly) numbers said in that recording.

Have you ever been in a place where you looked up and something just took your breath away?

old window with a backdrop of hibiscus plant shadow
Taken on the morning of November 21, 2014

The hidden meaning of “a great degree of flexibility and customization”

The Code4Lib mailing list has an interesting discussion about the Primo discovery layer. This particular discussion piqued my interest not because of the technical content, but for what’s not actually being discussed. Here’s the sentence that intrigued me (the italics are mine):

We use Alma/Primo here at California State University Sacramento and are finding a great degree of flexibility and customization of the local collections.

Flexibility and customization! I do like this. However, something else nagged me as well. Admit it: most of us are tinkerers. We like the idea that we can customize anything to make sure the relevant information is displayed properly, with additional bells and whistles if needed. We cherish the idea of “freedom” in this area, where we can basically create a “perfect” user interface without being constrained by the vendor’s product. After all, each library is different, and cookie-cutter templates could never satisfy us.

Here lies the hidden meaning of the freedom that we so want: we had better know what we’re doing. There will be times when we have to devote a lot of our time to planning and designing, and to the careful considerations we have to work through to make the product work effectively. Anybody whose work deals with information architecture and/or user experience knows this. Design decisions should be based on usability studies, data analysis, and user research: understanding how our users would interact with our web presence. Most of us already have data from our web logs; our face-to-face or virtual interactions with users who are attempting to use our web presence give us indications of the pain points of our website; and, if we’re lucky, we have already done one or two usability studies of our web presence.

However, when it comes to working on a totally new service with a new web presence, do those data and the analysis we did apply to this new design? How exactly do we go about designing a totally new user interface? There is no easy answer to that. It is always a good thing to involve our users from the beginning, getting their input and trusting their opinions. Or create stories of personas (stakeholders) and use them at least as a starting point. And this is probably where the paradox happens. We know our services and collections, and we know our systems. So we design how we present our collections and services based on our previous understanding of our past users, which might or might not still be relevant.

[lost my thought here. it might come back later. someday.]

On an information-seeking report

Project Information Literacy released their research report titled “Lessons Learned: How College Students Seek Information in the Digital Age” in 2009. The PDF report can be found at http://projectinfolit.org/pdfs/PIL_Fall2009_Year1Report_12_2009.pdf.

What makes this report interesting is that the group also tried to dig deeper into how students develop their strategies for meeting their information needs, both for their course-related work and for everyday life. In general, the students use course readings, library resources, and things like Google and Wikipedia when conducting course-related research. They tend to use Google, Wikipedia, and friends when it comes to everyday-life research.

One of the findings is that students tend to use the course readings first for their course-related research. This seems like a no-brainer to me. After all, the faculty is their “first contact” in the courses they take.

The report also points out the differences between the guides that librarians provide and the strategies employed by the students: “All in all, the librarian approach is one based by thoroughness, while the student approach is based on efficiency.” (page 20). This seems to line up nicely with what Roy Tennant wrote many years ago: “only librarians like to search; everyone else likes to find.” (Digital Libraries – Avoiding Unintended Consequences, http://www.libraryjournal.com/article/CA156524.html)

As a side note, I’m curious about the time and effort spent on research into students’ information-seeking behavior. Public services librarians seem to understand this already, based on their interactions with students. Interestingly enough, most library collection decisions are based on faculty research needs. So I wonder how familiarity with the resources affects the faculty’s decisions in constructing their course readings, and whether it might also affect student behavior in their information seeking.

All in all, this is their ultimate conclusion:

This is our ultimate conclusion: Todayʼs students are not naïve about sources, systems, and services. They have developed sophisticated information problem-solving strategies that help them to meet their school and everyday needs, as they arise.

The report came up with several recommendations, and one of them gave me pause:

We have come to believe that many students see instructors—not librarians—as coaches on how to consult research. This situation seems to occur whether the faculty may qualify as expert researchers in the area of student research methods, or not. Librarians and faculty should see the librarian-student disconnect as a timely opportunity, especially when it comes to transferring information competencies to students.

We recommend librarians take an active role and initiate the dialogue with faculty to close a divide that may be growing between them and faculty and between them and students—each campus is likely to be different. There are, of course, many ways to initiate this conversation that some libraries may already have in use, such as librarian-faculty roundtables, faculty visits, faculty liaison programs, and customized pathfinders to curriculum, to name but a few. And there is always room for creating new ways to facilitate conversation between faculty and librarians, too. No matter what the means of communication may be, however, librarians need to actively identify opportunities for training faculty as conduits for reaching students with sound and current information-seeking strategies, as it applies to their organizational settings.

Personally, I have no objection to the recommendation above. After all, that’s what we (the librarians) are here for. However, the recommendation above basically takes for granted that narrowing or closing the librarian-student disconnect would actually improve the outcomes of the students’ research. In other words, nowhere does the report indicate that this disconnect brings “harm” to student outcomes. It would be nice to see some kind of assessment of this.

snowy tree

snowy_tree.jpg

URL shortener’s life

I was perusing some emails that came from a mailing list, old blog posts that I bookmarked, and old tweets that I favorited. Many of them contain some kind of link shortener, like tinyurl, bitly, and t.co.

While the URL shorteners are still functioning just fine, the actual URLs themselves are not always, and sometimes I get a 404 error message from the target website. I know link rot happens, but somehow this irked me.
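If you’re curious how bad it is in your own bookmarks, a quick script can expand each shortened link and check whether the target still resolves. Here is a minimal sketch, assuming the requests package and a hypothetical links.txt file with one shortened URL per line:

```python
# Sketch: follow each shortened URL to its target and report whether it still resolves.
# Assumes the `requests` package and a hypothetical file `links.txt`.
import requests

with open("links.txt") as f:
    links = [line.strip() for line in f if line.strip()]

for short_url in links:
    try:
        # Follow redirects from the shortener all the way to the final target.
        resp = requests.get(short_url, allow_redirects=True, timeout=10)
    except requests.RequestException as exc:
        print(f"FAILED {short_url} ({exc})")
        continue
    marker = "OK" if resp.status_code < 400 else "BROKEN"
    print(f"{marker} {short_url} -> {resp.url} ({resp.status_code})")
```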

Web services-related terms

(just pulling out stuff from what my brain can come up with at the moment)

API – CSS – DTD – EDI – ElasticSearch – HTML – JSON – Linked Data – Mashup – Metadata – Microformats – OAI – OASIS – openURL – OSS – PURL – REST – SaaS – Semantic Web – SOAP – Solr – SRU – SRW – URN – W3C – WAI – WSDL – XML – XPath – XQuery – XSLT – YAZ

text comparison tool

Bookmarking:

Pretty Diff: http://prettydiff.com/
Text Comparison: http://www.textdiff.com/
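For quick local comparisons without a website, Python’s built-in difflib does much the same job from the command line. A minimal sketch:

```python
# Sketch: a small command-line text comparison using Python's built-in difflib,
# in the same spirit as the web-based tools above.
import difflib
import sys

def compare_files(path_a: str, path_b: str) -> None:
    with open(path_a) as a, open(path_b) as b:
        diff = difflib.unified_diff(
            a.readlines(), b.readlines(),
            fromfile=path_a, tofile=path_b,
        )
    sys.stdout.writelines(diff)

if __name__ == "__main__":
    compare_files(sys.argv[1], sys.argv[2])
```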

Tools I use when performing accessibility assessment

Below is a list of the variety of tools I use when doing an accessibility assessment of our web presence. I don’t use all of those tools at once, though. :-) The tools I use the most are WAVE and WebAnywhere, for a quick test. WAVE is most useful for informing the web developers if they’re missing anything, and WebAnywhere is most useful for showing how a screen reader would operate on the site. For a thorough test, I collaborate with my blind student: I observe her interactions with the e-resource’s user interface (in a way, doing a mini usability study) and note the “pain points” where she encounters difficulties in understanding the structure of the web pages at any given time, such as interacting with the search box, finding the relevant article within the search results, finding the way to save and send the article citation to herself, reading the article within the page, etc.

WAVE from WebAIM

http://wave.webaim.org
Their web-based tool works fine for websites that don’t need some kind of authentication, such as open access e-resources like PubMed, etc. For subscription-based resources, especially if you append a proxy link to the e-resource, download and install their Chrome extension.
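As a rough pre-check before running a page through WAVE, you can script a scan for some of the most common omissions it flags, such as missing alt text and unlabeled form fields. A minimal sketch, assuming the requests and beautifulsoup4 packages and a hypothetical example URL:

```python
# Sketch: flag a few common accessibility omissions on a public page
# (images without an alt attribute, form fields without an associated label).
# Assumes the `requests` and `beautifulsoup4` packages; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.org/"  # hypothetical page to check
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

for img in soup.find_all("img"):
    if img.get("alt") is None:
        print("Image with no alt attribute:", img.get("src"))

# Collect ids referenced by <label for="..."> so we can spot unlabeled fields.
labeled_ids = {lab.get("for") for lab in soup.find_all("label") if lab.get("for")}
for field in soup.find_all(["input", "select", "textarea"]):
    if field.get("type") in ("hidden", "submit", "button"):
        continue
    if field.get("id") not in labeled_ids and not field.get("aria-label"):
        print("Form field without a label:", field.get("name") or field)
```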

Functional Accessibility Evaluator (FAE) from UIUC

http://fae.cita.uiuc.edu/
This tool uses Illinois’ web accessibility requirements as its evaluation procedure, which tend to be more restrictive than other states’. But it’s still a good tool. The explanation in the report is quite useful, especially for website designers and developers.

Juicy Studio Accessibility Toolbar (Firefox extension)

https://addons.mozilla.org/en-US/firefox/addon/juicy-studio-accessibility-too/
I use this primarily for analysing color contrast. Many “modern” websites use a grey font, sometimes on a grey background, which makes reading the text quite difficult for those with a visual disability. It is useful for checking ARIA (Accessible Rich Internet Applications) markup as well.
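If you want the numbers behind what the contrast checker reports, the WCAG 2.0 contrast ratio is straightforward to compute yourself. A minimal sketch of that calculation (the grey-on-white example values are made up):

```python
# Sketch: compute the WCAG 2.0 contrast ratio between two hex colors,
# the same figure the color-contrast checkers report.
def _channel(c: int) -> float:
    # Linearize an 8-bit sRGB channel per the WCAG relative-luminance formula.
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: grey text on a white background (made-up values).
print(round(contrast_ratio("#999999", "#ffffff"), 2))  # well below the 4.5:1 AA threshold
```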

The three tools above point out coding problems, especially in the area of using proper tags, labels, etc. The rest of the tools below are useful for pointing out user-interaction challenges caused by design decisions (information architecture, content structure within a page, etc.).

Keyboard manual operation

This is the simplest test. You just use the TAB and arrow keys on your keyboard to move around the page. It is useful for checking whether the website has a “Skip to Main Content” option, especially if the site contains a lot of navigation links. It can be quite tedious if the site has a lot of links. But then you’d know the pain. ;-)
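To get a rough sense of how much tabbing a page demands, and whether a skip link exists, before you even start, a short script can count the focusable elements. A minimal sketch, assuming the requests and beautifulsoup4 packages and a hypothetical example URL:

```python
# Sketch: estimate how much tabbing a page requires and whether it offers a skip link.
# Assumes the `requests` and `beautifulsoup4` packages; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.org/"  # hypothetical page to check
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Elements a keyboard user would normally have to TAB through.
focusable = soup.find_all(["a", "button", "input", "select", "textarea"])
print(f"Roughly {len(focusable)} focusable elements to tab through.")

# A skip link is usually an in-page anchor near the top whose text mentions skipping.
skip_links = [
    a for a in soup.find_all("a", href=True)
    if a["href"].startswith("#") and "skip" in a.get_text(strip=True).lower()
]
print("Skip link found:", bool(skip_links))
```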

Fangs Screen Reader Emulator (Firefox extension)

https://addons.mozilla.org/en-US/firefox/addon/fangs-screen-reader-emulator/
This is probably the easiest tool for viewing how a screen reader might read the content of a website from top to bottom without user interaction. You’ll see the output as text rather than as a voice-over. If you do use this tool, please consider a donation to the developer.

SATOGO

http://www.satogo.com
SATOGO is a web-based screen reader. Pretty straightforward. You need to use IE and Windows OS, and download & install their file first. Create an account if you plan to use this service often.

WebAnywhere

http://webanywhere.cs.washington.edu/
Another web-based tool that emulates a screen reader. It works pretty well, but it cannot be used for resources that require you to authenticate first (using the proxy link, etc.) or if the e-resource uses your IP address for authorization.

NVDA (NonVisual Desktop Access)

http://www.nvaccess.org/
A free screen reader that is now quite comparable to the JAWS screen reader without the added $$$$. It works on Windows only. If you use this tool, please consider donating to the developer.

VoiceOver

For Mac/OS X users, the VoiceOver feature is quite useful. Follow their documentation on how to operate VoiceOver: http://www.apple.com/voiceover/info/guide/