Posts Tagged ‘flickr’

Colours of a tag

Friday, May 14th, 2010

I’ve been expanding upon the experiments I presented at VALA earlier this year, where I built a search by colour application for the National Library of Australia. Out of curiosity, I built the same application using approximately 35,000 images from Flickr Commons.

Since building these applications I’ve been wondering, do certain topics (or tags) also relate to a colour? Does a search for Paris return the colourful images your imagination expects? Are images tagged with red really red?

With a bit of help from the Flickr API, I’ve built an application that retrieves the 50 most interesting Flickr Commons images for a particular tag and displays their colours. It also attempts to create a definitive colour for the tag by averaging those colours.
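For the curious, here’s a minimal sketch of how the query and averaging might work in Python, using the public Flickr REST API and Pillow. The API key is a placeholder you’d supply yourself, and the naive per-channel mean here stands in for whatever averaging the real application does.

```python
import io
import requests
from PIL import Image

FLICKR_API_KEY = "your-api-key"  # placeholder, not a real key
REST = "https://api.flickr.com/services/rest/"

def commons_photos_for_tag(tag, count=50):
    """Fetch the `count` most interesting Flickr Commons photos for a tag."""
    params = {
        "method": "flickr.photos.search",
        "api_key": FLICKR_API_KEY,
        "tags": tag,
        "is_commons": 1,                 # restrict results to The Commons
        "sort": "interestingness-desc",  # most interesting first
        "per_page": count,
        "format": "json",
        "nojsoncallback": 1,
    }
    return requests.get(REST, params=params).json()["photos"]["photo"]

def average_colour(photos):
    """Average every pixel of each photo's square thumbnail into one RGB."""
    totals, pixels = [0, 0, 0], 0
    for p in photos:
        # "_q" is Flickr's 150x150 square thumbnail size
        url = "https://live.staticflickr.com/{server}/{id}_{secret}_q.jpg".format(**p)
        img = Image.open(io.BytesIO(requests.get(url).content)).convert("RGB")
        for r, g, b in img.getdata():
            totals[0] += r; totals[1] += g; totals[2] += b
            pixels += 1
    return tuple(t // pixels for t in totals)

print(average_colour(commons_photos_for_tag("red")))
```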

As you explore more and more tags, you tend to find that most return an average muddy brown colour. I suspect this is partly because many of the images are black and white, which skews the process.

It’s really interesting to explore a few different subjects and see what results appear.

Formats

Can we find a colour gamut for a format?

Cities and countries

Do different cities or countries have different colours associated with them?

Objects

Do objects have particular colours associated with them? Take a bridge. Why do bridges exist? They exist to let us cross a river or a valley. By that logic, we should expect photos tagged with bridge to contain a reasonably large amount of green or blue.

Sure enough, we get quite a few images with green and blue in them.

Colours

Of course colours are a natural subject to test.

Blue

Green

Red

Yellow

Have a go

Feel free to explore the application and find some interesting results. The URL is totally hackable if the tag you want to test isn’t part of the initial tag cloud.

Immediate sharing

Sunday, September 27th, 2009

This week the east coast of Australia was blanketed in a dust storm. The worst day was Wednesday the 23rd, when Sydney was shrouded in an eerie red dust. The social networks were bombarded with people’s accounts of the event.

I decided to do a little analysis of how quickly people reacted to the event and shared their experiences of it. Using the Flickr API, I exported all the photos taken on the 23rd of September that had been tagged with Sydney and dust, then looked at how long it took people to upload them. I removed those photos where the user wasn’t displaying the EXIF metadata, and those where the camera time was obviously set incorrectly (the time the photo was taken was later than the time it was uploaded).
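As a rough sketch, the export and the delay calculation can be done with a single flickr.photos.search call. The API key is a placeholder, and the date handling is simplified (real EXIF times need timezone care):

```python
from datetime import datetime
import requests

FLICKR_API_KEY = "your-api-key"  # placeholder, not a real key
REST = "https://api.flickr.com/services/rest/"

params = {
    "method": "flickr.photos.search",
    "api_key": FLICKR_API_KEY,
    "tags": "sydney,dust",
    "tag_mode": "all",                      # photos must carry both tags
    "min_taken_date": "2009-09-23 00:00:00",
    "max_taken_date": "2009-09-23 23:59:59",
    "extras": "date_taken,date_upload",     # ask for both timestamps up front
    "per_page": 500,
    "format": "json",
    "nojsoncallback": 1,
}
photos = requests.get(REST, params=params).json()["photos"]["photo"]

delays = []
for p in photos:
    taken = datetime.strptime(p["datetaken"], "%Y-%m-%d %H:%M:%S")
    uploaded = datetime.fromtimestamp(int(p["dateupload"]))
    delay = (uploaded - taken).total_seconds() / 3600.0
    if delay >= 0:  # drop photos whose camera clock was clearly set wrong
        delays.append(delay)

print(f"{len(delays)} photos, median delay {sorted(delays)[len(delays)//2]:.1f} h")
```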

Time to upload (days)

The bulk of the photos were uploaded to Flickr within 24 hours of being taken, with very few photos being uploaded 2 or 3 days after being taken. It was an immediate action. I then looked in more detail at what happened with those photos that were uploaded within 24 hours of being taken.

Time to upload (hours)

51% of photos were uploaded to Flickr within 4 hours of being taken. Given that the dust storm hit as people were travelling to work, there is also a small increase in the number of photos uploaded 10-15 hours later, which corresponds to people uploading images that evening once they arrived home from work, quite possibly their first opportunity to do so.

I also did some analysis on those photos that were uploaded in the first 4 hours of being taken. Did this immediacy relate to the type of camera used?
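The camera model itself can be pulled from flickr.photos.getExif, which returns whatever EXIF tags the photo’s owner has chosen to expose. A hedged sketch, assuming the JSON field names below and a placeholder API key:

```python
import requests

FLICKR_API_KEY = "your-api-key"  # placeholder, not a real key
REST = "https://api.flickr.com/services/rest/"

def camera_model(photo_id):
    """Return the EXIF camera model for a photo, or None if it's hidden."""
    params = {
        "method": "flickr.photos.getExif",
        "api_key": FLICKR_API_KEY,
        "photo_id": photo_id,
        "format": "json",
        "nojsoncallback": 1,
    }
    data = requests.get(REST, params=params).json()
    if data.get("stat") != "ok":
        return None                        # EXIF hidden or photo unavailable
    for tag in data["photo"]["exif"]:
        if tag["tag"] == "Model":
            return tag["raw"]["_content"]  # e.g. "Canon EOS 450D"
    return None
```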

Camera type

24% of images didn’t have the model of camera recorded in their EXIF metadata. What was surprising was that only 6% of these rapidly uploaded images came from mobile devices such as iPhones and Nokia camera phones. Over 50% came from digital SLR cameras, while the remainder were mostly compact cameras.

This demonstrates a desire to immediately share what is happening around us with a wider audience, but we aren’t doing that sharing from our mobile devices.

DigitalNZ location search

Thursday, June 18th, 2009

Over the past couple of months I’ve been building a little application using the APIs from the DigitalNZ project. DigitalNZ is a collaboration between government departments, publicly funded organisations, the private sector, and community groups to expose and share their combined digital content. Part of their plan is to provide a publicly available API so that developers can present the content in ways the institutions may not have thought about.

Typically, a large dataset has a search box as its main interface. I wanted to get right away from that approach and create an engaging interface, so this application uses a map to let the user freely explore the content.

It currently uses a combination of APIs from Google and Flickr to convert a latitude and longitude from the map into a place name. It then displays a shapefile from Flickr to approximate the area being searched, and returns a list of relevant results from DigitalNZ. Since I started work on this, the data returned from both of these APIs has been released under a Creative Commons license (Yahoo have released their GeoPlanet data and Flickr have released their shapefile data). I’ll end up incorporating these releases into the application rather than relying on the APIs for the functionality.
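As an illustration, the map-to-placename step might look something like this: resolve a clicked point to a Flickr place with flickr.places.findByLatLon, then ask flickr.places.getInfo for that place’s shape. The API key is a placeholder, and the shape-parsing details are an assumption to verify against a live response:

```python
import requests

FLICKR_API_KEY = "your-api-key"  # placeholder, not a real key
REST = "https://api.flickr.com/services/rest/"

def call(method, **kwargs):
    """Helper for GET requests against the Flickr REST endpoint."""
    params = dict(method=method, api_key=FLICKR_API_KEY,
                  format="json", nojsoncallback=1, **kwargs)
    return requests.get(REST, params=params).json()

def place_for_point(lat, lon):
    """Reverse-geocode a point to the smallest Flickr place containing it."""
    found = call("flickr.places.findByLatLon", lat=lat, lon=lon)
    return found["places"]["place"][0]

place = place_for_point(-41.29, 174.78)  # somewhere in Wellington
print(place["woeid"], place["name"])

# The place's shapefile (if one exists) comes back from places.getInfo as
# polylines of "lat,lon" pairs that can be drawn over the map.
info = call("flickr.places.getInfo", woe_id=place["woeid"])
shape = info["place"].get("shapedata")
```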

Explore the contents of DigitalNZ.


How libraries can learn from Twitter

Friday, May 29th, 2009

This morning an interesting Tweet arrived on a subject that I’ve been thinking about quite a lot lately:

there seem to be more people using twitter apps than twitter web. What is twitter doing wrong?
@katykat

In April 2008, ReadWriteWeb carried out a study, How We Tweet: The Definitive List of the Top Twitter Clients, which showed that only 56% of Twitter users used the web interface. My gut feeling is that this figure is lower now, given the growth in use of devices like the iPhone.

This is a perfectly valid question to ask of a traditional website. But Twitter isn’t a website; it’s more than that, it’s a service, like email. You are not restricted to interacting with your email via one particular method. Likewise, because applications can be built upon Twitter’s APIs, you are not restricted to using their service in one particular way; you have a choice in how you interact with it. The important thing isn’t the website, it is the service. Twitter.com could basically become a one-page website and, as long as the APIs were maintained, the service would continue as normal for much of the Twitter community. Users can choose the application that suits them, based upon the interface they like and the features they are going to use.

Flickr, despite having a far greater number of APIs available, hasn’t followed the same path as Twitter. Most people still interact with Flickr via the standard web interface. This is mostly due to its terms of use, which forbid applications that replicate the user experience of Flickr:

Use Flickr APIs for any application that replicates or attempts to replace the essential user experience of Flickr.com

Rev Dan Catt, who up until recently worked at Flickr, said:

I’ve often joked that I could probably get more stuff done working with the Flickr API outside of Flickr than inside.

So, to answer the question: I really don’t think Twitter is doing anything wrong; they are doing everything right.

What can libraries learn from what Twitter, and to a lesser degree Flickr, are doing? Can we start to think about our catalogue (or other core services) not as a website, but as a service? The website version of the catalogue may be just one aspect of the delivery mechanism for the information we wish to distribute. Why can’t we provide our services to our users in whatever way they wish to interact with them?

Why can’t we provide specialised access to our catalogues so that specific user groups (or anyone) can create:

  • a simplified interface for high school users, without all the complex features they don’t use
  • an application built around the needs of a historical society
  • a complex view of the catalogue for academics or librarians
  • a visual or geographic search
  • a social network based around the catalogue

Institutions like the Brooklyn Museum and collaborative efforts such as DigitalNZ are providing their content to developers to do exactly this sort of thing. It’s very early days still and it will be interesting to see what starts to develop.

Let’s start thinking about interacting with the service, not the website.

New York then and now

Tuesday, January 6th, 2009

I’ve been playing around with yet another Flickr Commons then and now project, this time using the images of New York from 1935-1938 from the New York Public Library.  The process for this has been a little bit different to the previous then and now demonstrations.  The images that have been posted don’t have any geo-location metadata (a latitude or longitude) so they can’t be placed directly on a map in the same manner as other Commons photographs.  What they do have instead, is very good street addresses in their titles.

The Google Maps API has a geocoding call that translates a human-readable address into a latitude and longitude.  So if we pass the title of a photo into the API, let’s say “Willow Street, No. 113, Brooklyn”, it returns the latitude and longitude “40.6978614, -73.9955804”.
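Here’s a small sketch of that title-to-coordinates step. The endpoint shown is the current JSON geocoding service, which requires an API key these days, so take it as illustrative of the call rather than the exact one this demo used:

```python
import requests

GOOGLE_API_KEY = "your-api-key"  # placeholder, not a real key

def geocode(address):
    """Resolve a human-readable address to (lat, lng), or None on failure."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": GOOGLE_API_KEY},
    ).json()
    if resp["status"] != "OK":
        return None  # ambiguous or unresolvable title
    loc = resp["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(geocode("Willow Street, No. 113, Brooklyn"))
# -> roughly (40.6978614, -73.9955804)
```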

For the demonstration I’m using a KML file.  Generating this file is now a two-step process: import the data from Flickr using their API, then pass each photo’s title into the Google Maps API to get a latitude and longitude and merge both results into a KML file.
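The merge step then just writes each geocoded photo out as a KML Placemark. A toy sketch, with a made-up tuple format and example URL (a real file would carry styles and richer metadata):

```python
from xml.sax.saxutils import escape

def to_kml(photos):
    """photos: iterable of (title, image_url, lat, lng) tuples."""
    placemarks = "".join(
        "  <Placemark>\n"
        f"    <name>{escape(title)}</name>\n"
        f"    <description><![CDATA[<img src=\"{url}\"/>]]></description>\n"
        # KML coordinates are longitude,latitude,altitude
        f"    <Point><coordinates>{lng},{lat},0</coordinates></Point>\n"
        "  </Placemark>\n"
        for title, url, lat, lng in photos
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            f'<Document>\n{placemarks}</Document>\n</kml>')

kml = to_kml([("Willow Street, No. 113, Brooklyn",
               "https://example.com/willow-street.jpg",
               40.6978614, -73.9955804)])
```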

Of course, some of the titles contain ambiguous addresses or don’t provide enough information, and so don’t automatically return a result.  For some of the images I’ve manually tweaked the data passed into the geocoding API to obtain one.  The results are by no means perfect, but it’s a pretty good demonstration of what can be achieved from very little data, with everything automated.

Please explore my New York then and now mashup and let me know what you think.
