Wednesday, September 30, 2015

2015-09-30: Digital Preservation - Magdeburg Germany Trip Report


Dr. Herzog: This large green area on your left is Sanssouci Park. It has 11 palaces in it.
Yasmin: I want to visit this park after we are back from the university, can we?
Dr. Herzog: We sure can... I think we will be back before sunset.
Yasmin: I love beautiful things.
Dr. Herzog: Who doesn't?
Sawood: [Smiles]

The three souls were heading from Potsdam, Germany to Hochschule Magdeburg-Stendal (Magdeburg-Stendal University of Applied Sciences) in Dr. Michael Herzog's car for a lunch lecture on the topic of Digital Preservation. Yasmin and Sawood from the Web Science and Digital Libraries Research Group of Old Dominion University, Norfolk, Virginia were invited for the talk by Dr. Herzog at his SPiRIT Research Group. The two WSDL members had presented their work at TPDL 2015 in Poznan, Poland; on their way back home they were hosted by Dr. Herzog in Germany for the lunch lecture. You may also enjoy the TPDL 2015 trip report by Yasmin.


Passing by beautiful landscapes, crossing bridges and rivers, observing renewable energy sources such as windmills and solar panels, and touching almost 200 km/h on the highway, we reached the university in Magdeburg. Due to the vacations there were not many people on campus, but the canteen was still crowded when we went there for lunch. Dr. Herzog's student, Benjamin Hatscher (who created the poster for the talk), joined us for lunch. Then we headed to the room that was reserved for the talk and started the session.

Dr. Herzog briefly introduced us, our research group, and our topics for the day to the audience. He also shared his recent memories about the time he spent at ODU and about his interactions with the WSDL members. Then he left the podium for Yasmin.


Yasmin presented her talk on the topic, "Using Web Archives to Enrich the Live Web Experience Through Storytelling". She noted that her work is supported in part by IMLS. She started her introduction with a set of interesting images. She then illustrated the importance of the time aspect in storytelling and described what storytelling looks like on the Web, especially on social media. She discussed the need to select a very small but representative subset of resources around a certain topic to tell its story. Selecting that small representative subset is a challenging but important task: it gives a brief summary as well as an entry point to dive deep into the story and explore the remaining resources. She gave examples of storytelling services such as Facebook Lookback, which compiles a few highlights from hundreds or thousands of someone's posts, and 1 Second Everyday. Then she moved on to the popular social media storytelling service Storify and described its issues, such as flat representation, bookmarking rather than preservation, and resources going off-topic over time. This led her to a description of Web archives, Memento, and Web archiving services (mainly Archive-It). She then described the shortcomings of Web archiving services when it comes to storytelling and how they can be improved by combining Web archives and storytelling services. She concluded her talk by describing her approaches and policies for selecting a representative subset of resources from a collection.


I, Sawood Alam, presented my talk on the topic, "Web Archiving: A Brief Introduction". I briefly introduced myself with the help of my academic footprint and lexical signature. The "lexical signature" term led me to touch on Martin Klein's work and how I used it to describe a person instead of a document. Then I followed the agenda for the talk and began with a description of archiving in general, the concept of Web archiving, and the differences between the two.


I then briefly talked about the purpose and importance of Web archiving on institutional and personal scales. Then I described various phases of and challenges involved in Web archiving such as crawling, storage, retrieval, replay, completeness, accuracy, and credibility. This gave me the opportunity to reference various WSDL members' research work such as Justin's Two-Tiered Crawling and Scott's Temporal Violations. Then I talked about existing Web and digital archiving efforts and various tools used by Web archivists at various stages. The list included widely used tools such as Heritrix, OpenWayback, and TimeTravel as well as various tools developed by WSDL members or other individual developers such as CarbonDate, Warrick, Synchronicity, WARCreate, WAIL, Mink, MemGator, and Browsertrix. After that I briefly described the Memento protocol and the Memento aggregator.

This led me to my IIPC-funded research work on Archive Profiling. In this section of the talk I described why archive profiling is important, how it can help in Memento query routing, and what an archive profile looks like.

To motivate the audience toward research in the Web archiving field, I discussed various related areas that offer vast research opportunities to explore.

Then I concluded my talk with an introduction to our Web Science and Digital Libraries Research Group. This was the fun part of the talk, full of pictures illustrating the lifestyle and work environment at our lab. I illustrated how we use tables in our lab for fun traditions such as bringing lots of food after a successful defense or spreading assignment submissions on the Ping Pong table for parallel evaluation. I illustrated our effective use of the whiteboards, from the "about:blank" state to the highly busy and annotated state, and the reserved space for the "PhD Crush" board that keeps track of the progress of each WSDL member in a visual and fun way. I couldn't resist showing off our Origami skills, on the scale of covering an entire cubicle and every single item in it individually.




After a brief Q&A session, Dr. Herzog formally concluded the event.


From there we were all free to explore the beauty of the places around, and we did so to the extent possible. We toured around the historical places of the city of Magdeburg, such as the Gothic architecture masterpiece, Magdeburg Cathedral, and on our way back to Potsdam we saw the newly built Magdeburg Water Bridge, the largest canal underbridge.


By the time we reached Potsdam the sun had already set, but we still managed to see a couple of the palaces in Sanssouci Park, and they looked beautiful in that light. We even managed to take a few pictures in the low light.


Dr. Herzog invited us for dinner at his place and we had no reason or intention to say no. He was the head chef in his kitchen and prepared for us a delicious rice dish and white asparagus (a new vegetable for me). Since I like cooking, I decided to join him in his kitchen and he gladly welcomed me. I did not have any plans in advance, but after a brief look inside his fridge I decided to prepare egg hearts and a salad. During and after the dinner Dr. Herzog described and showed pictures of many historical places in Potsdam and made us excited to visit them the next day.


The next morning we had to head back to Berlin, but we sneaked in a couple of hours to see the beauty of Sanssouci Park and the Sanssouci Palace in the bright sunlight. The long series of stairs from the front entrance of the palace leading down to the water fountain, with stepped walls on both sides covered with grape vines, was mesmerizing.


Dr. Herzog dropped us off at the train station (or Bahnhof in German), from where we took a train to Berlin. We got almost a day to explore Berlin and we did so to the extent possible. It is an amazing city, full of historical masterpieces and state-of-the-art architecture. At one point, we got stuck in a public demonstration and couldn't use any transport due to the road jam, although we had no idea what the demonstration was for.


Later in the evening Dr. Herzog came to Berlin to pick up his wife from the Komische Oper Berlin, where she was performing in an opera, and we got a chance to look inside this beautiful place. This way we got a few more hours for a guided tour of Berlin and had dinner at an Italian restaurant.


It was a fun trip exploring three beautiful cities of Germany immediately after exploring yet another beautiful and colorful city, Poznan, Poland. We couldn't have imagined anything better. I published seven photo spheres of various churches and palaces on Google Maps during this trip and came home with an album full of pictures.

On behalf of my university, department, research group, and myself, I would like to extend my sincere thanks and regards to Dr. Herzog for his invitation, warm welcome, hosting, and the time he spent showing us the best of Magdeburg, Potsdam, and Berlin during our stay in Germany. He is a fantastic host and tour guide. Now turning back to the send-off conversation among the three.

Sawood: Yasmin, now you know why Dr. Herzog said, "who doesn't" when you said, "I love beautiful things".
Yasmin: [Smiles]
Dr. Herzog: [Smiles]

--
Sawood Alam

Monday, September 28, 2015

2015-09-28: TPDL 2015 in Poznan, Poland


The Old Market Square in Poznan
On September 15, 2015, Sawood Alam and I (Yasmin AlNoamany) attended the 2015 Theory and Practice of Digital Libraries (TPDL) Conference in Poznan, Poland. This year, WS-DL had four accepted papers in TPDL from three students (Mohamed Aturban (who could not attend the conference because of visa issues), Sawood Alam, and Yasmin AlNoamany). Sawood and I arrived in Poznan on Monday, Sept. 14. Although we were tired from travel, we could not resist walking to the best area in Poznan, the old market square. It was fascinating to see those beautiful colorful houses at night, with the water reflecting on them after the rain, and to hear the beautiful European music from the many artists playing in the street.

The next morning we headed to the conference, which was held in the Poznań Supercomputing and Networking Center. The organization of the conference was amazing, and the general conference co-chairs, Marcin Werla and Cezary Mazurek, were always there to answer our questions. Furthermore, the people at the conference reception were there the whole time to help us with transportation, especially with communicating with taxi drivers; we did not speak Polish and they did not speak English. Every day of the conference there were coffee breaks where we had hot and cold drinks and snacks. It is worth mentioning that I had the best coffee I have ever tasted in Poland :-). The main part of the TPDL 2015 conference was streamed live and recorded. The recordings will be processed and made publicly available online on the PlatonTV portal.

Sawood (on the left) and Jose (on the right)
We met Jose Antonio Olvera, who interned in the WS-DL lab in summer 2014, at the entrance. At the conference, Jose had an accepted poster, “Evaluating Auction Mechanisms for the Preservation of Cost-Aware Digital Objects Under Constrained Digital Preservation Budgets”, which was presented on the evening of the first day in the poster session. It was nice meeting him, since I was not there when he interned in our lab.
The first day of the main conference, September 15th, started with a keynote speech by David Giaretta, whom I was honored to speak with many times during the conference and who was in the audience of my presentations. He talked about "Data – unbound by time or discipline – challenges and new skills needed". At the beginning, Giaretta introduced himself with a summary of his background. His speech was mainly about data preservation and the challenges that this field faces, such as link rot, which Giaretta considered a big horror. He mentioned many examples of the possibility of data loss. Giaretta talked about the big data world and presented the 7 (or 8 (or 9)) V’s of big data: volume, velocity, variety, volatility, veracity, validity, value, variability, and visualization. I loved these quotes from his speech:
  • "Preservation is judged by continuing usability, then come value". 
  • "Libraries are gateways to knowledge". 
  • "Metadata is classification".
  • "emulate or migrate".
He talked about how valuable and expensive it is to preserve scientific data, then raised an issue about reputation for keeping things over time and long-term funding. Funding is a big challenge in digital preservation, so he talked about vision and opportunities for funding. Giaretta concluded his keynote with the types of digital objects that need to be preserved, such as simple documents and images, scientific data, complex objects, and objects that change over time (such as annotations). He raised this question: "what questions can one ask when confronted with some completely unfamiliar digital objects?" Giaretta ended his speech with a piece of advice: "Step back and help the scientists to prepare data management plans, the current data management plan is very weak".

After the keynote we went to a coffee break, then the first session of the conference, "Social-technical perspectives of digital information", started. The session was led by WS-DL’s Sawood Alam presenting his work "Archive Profiling Through CDX Summarization", a product of an IIPC-funded project. He started with a brief introduction to the Memento aggregator and the need to profile the long tail of archives to improve the efficiency of the aggregator. He described two earlier profiling efforts: the complete knowledge profile by Sanderson and the minimalistic TLD-only profile by AlSum. He described the limitations of the two profiles and explored the middle ground for various other possibilities. He also talked about the newly introduced CDXJ serialization format for profiles and illustrated its usefulness in serializing profiles at scale with the ability to merge and split arbitrary profiles easily. He evaluated his findings and concluded that his work so far gained up to 22% routing precision with less than 5% cost relative to the complete knowledge profile, without any false negatives. The code to generate profiles and benchmarks can be found in a GitHub repository.
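To make the idea of summarizing CDX keys at different profiling policies concrete, here is a simplified sketch with made-up SURT keys; it only illustrates the general idea of counting holdings at a TLD-only versus a hostname-level policy and is not the Archive Profiler's actual code:

# Simplified illustration of CDX summarization at two profiling policies
# (TLD-only vs. hostname-level); example data, not the Archive Profiler's code.
from collections import Counter

# SURT-form keys as they might appear in CDX index entries (hypothetical)
surt_keys = [
    "com,cnn)/world/2014/01/01/story.html",
    "com,cnn)/world/2014/02/02/story.html",
    "edu,odu,cs)/~salam/",
    "org,netpreserve)/projects/profiling",
]

tld_profile = Counter(key.split(",")[0] for key in surt_keys)
host_profile = Counter(key.split(")")[0] + ")/" for key in surt_keys)

print(tld_profile)    # e.g. Counter({'com': 2, 'edu': 1, 'org': 1})
print(host_profile)   # e.g. Counter({'com,cnn)/': 2, 'edu,odu,cs)/': 1, 'org,netpreserve)/': 1})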



Next, there was a switch between the second and the third presentations, and since Sawood was supposed to present on behalf of Mohamed Aturban, the chair of the session gave Sawood enough time to breathe between the two presentations.

The second presentation was "Query Expansion for Survey Question Retrieval in the Social Sciences" by Nadine Dulisch from GESIS and Andreas Oskar Kempf from ZBW. Andreas started with a case study on the usage, in social science, of survey questions developed by operational organizations. He presented the importance of social science survey data for social scientists. Then, Nadine talked about the approaches they applied for query expansion. She showed that statistical-based expansion was better than intellectual-based expansion. They presented the results of their experiments based on trec_eval. They evaluated thesaurus-based and co-occurrence-based expansion approaches for query expansion to improve retrieval quality in digital libraries and research data archives. They found that queries automatically expanded using extracted co-occurring terms could provide better results than queries manually reformulated by a domain expert.

Sawood then presented "Quantifying Orphaned Annotations in Hypothes.is". In this paper, Aturban et al. analyzed 6,281 highlighted text annotations in the Hypothes.is annotation system. They also used the Memento Aggregator to look for archived versions of the annotated pages. They found that 60% of the highlighted text annotations are orphaned (i.e., attached to neither the live web nor any memento) or in danger of being orphaned (i.e., attached to the live web but not to any memento). They found that if a memento exists, there is a 90% chance that it recovers the annotated webpage. Using public archives, only 3% of all highlighted text annotations that would otherwise be orphaned could be reattached. They found that for the majority of the annotations, no memento existed in the archives. Their findings highlight the need for archiving pages at the time of annotation.


After the end of the session, we took a lunch break, during which we gathered with Jose Antonio Olvera and many other conference attendees to exchange research ideas.

After the lunch break, we attended the second session of the day, "Multimedia information management and retrieval and digital curation". The session started with "Practice-oriented Evaluation of Unsupervised Labeling of Audiovisual Content in an Archive Production Environment”, presented by Victor de Boer. In their work, de Boer et al. evaluated the automatic labeling of audiovisual content to improve efficiency and inter-annotator agreement by generating annotation suggestions automatically from textual resources related to the documents to be archived. They performed pilot studies to evaluate term suggestion methods through precision and recall, taking terms assigned by archivists as the ground truth. They found that the quality of the automatic term suggestions is sufficiently high.

The second presentation was "Measuring Quality in Metadata Repositories" by Dimitris Gavrilis. He started his presentation by mentioning that this is a hard topic, then explained why this research is important. He explained the specific criteria that determine data quality: completeness, validity, consistency, timeliness, appropriateness, and accuracy. In their paper, Gavrilis et al. introduced a metadata quality evaluation model (MQEM) that provides a set of metadata quality criteria as well as contextual parameters concerning metadata generation and use. The MQEM allows curators and metadata designers to assess the quality of their metadata and to run queries on existing datasets. They evaluated their framework on two different use cases: application design and content aggregation.

After the session, we took a break, and I got ill, which prevented me from attending the discussion panel session, entitled "Open Access to Research Data: is it a solution or a problem?", and the poster session. I went back to the hotel to rest and prepare for the next day's presentation. I am embedding the tweets about the panel and the poster session.

The next day I felt fine, so we went early to have breakfast in the beautiful old market square, then headed to the conference. The second day was opened by Cezary Mazurek, who introduced the day's sessions and thanked the sponsors of the conference. Then he left us with a beautiful piece of music, which was related to the second keynote speaker.

The keynote speech was "Digital Audio Asset Archival and Retrieval: A Users Perspective" by Joseph Cancellaro, an active composer, musician, and chair of the Interactive Art and Media Department of Columbia College in Chicago. Cancellaro started with a short bio about himself. The first part of his presentation handled issues of audio assets and the constant problems for sound designers in non-linear environments: naming conventions (meta tags), search tools, storage (failure), retrieval (failure), DSP (digital signal processing), etc. He also mentioned how they handle these issues in his department; for example, for naming conventions, they add tags to the files. He explained the simple sound asset SR workflow. Preservation to Cancellaro is “not losing any more audio data". The second part of his presentation was about storage, retrieval, possible solutions, and content creation. He mentioned some facts about storage and retrieval:
  • The decrease in technology costs has reduced the local issues of storage capacity (this is always a concern in academia). 
  • Bandwidth is still an issue in real-time production. 
  • Non-linear sound production is a challenge for linear minded composers and sound designers.
He mentioned that searching for sound objects is a blocking point for many productions, then continued, "when I ask my students about the search options for the sound track they have, all I hear is crickets". At the end, Dr. Cancellaro presented the agile concept as a solution for content management systems (CMS). He presented the basic digital audio theory: sound as a continuous analog event captured at specific data points.

After the keynote, we took a coffee break, then the sessions of the second day started with "Influence and Interrelationships among Chinese Library and Information Science Journals in Taiwan" by Ya-Ning Chen. In this research, the authors investigated the citation relations between different journals based on a data set collected from 11 Chinese LIS journals in Taiwan (2,031 articles from 2001 to 2012). The authors measured the indegree, outdegree, and self-feeding ratios between the journals. They also measured the degree and betweenness centrality from SNA to investigate the information flow between Chinese LIS journals in Taiwan. They created an 11 × 11 matrix that expresses the journal-to-journal analysis, as well as a sociogram of the interrelationships among Chinese LIS journals in Taiwan that summarizes the citation relations between the journals they studied.

Next was a presentation entitled "Tyranny of Distance: Understanding Academic Library Browsing by Refining the Neighbour Effect" by Dana McKay and George Buchanan.
Dana and George explained the importance of browsing books as a part of information seeking, and how this is not well supported for e-books. They used different datasets to examine patterns of co-borrowing. They examined different aspects of the neighbour effect on browsing behavior. Finally, they presented their findings to improve browsing in digital libraries.

The last presentation of this session was a study on Storify entitled "Characteristics of Social Media Stories" by Yasmin AlNoamany. Based upon analyzing 14,568 stories from Storify, AlNoamany et al. identified the structural characteristics of popular (i.e., receiving the most views) human-generated stories in order to build a template that will be used later for generating (semi-)automatic stories from archived collections. The study investigated many questions regarding the features of the stories, such as the length of the story, the number of elements, the decay rate of the stories, etc. At the end, the study differentiated popular stories from unpopular stories based on these features. Based on the Kruskal-Wallis test, at the p ≤ 0.05 significance level, the popular and the unpopular stories are different in terms of most of the features. Popular stories tend to have more web elements (medians of 28 vs. 21), a longer timespan (5 hours vs. 2 hours), longer editing time intervals, and a lower decay rate.
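For readers who want to run this kind of comparison on their own data, here is a minimal sketch of a Kruskal-Wallis test on a single story feature; the numbers are made up for illustration and are not the paper's data:

# Minimal sketch: Kruskal-Wallis test on one story feature.
# The samples below are fabricated for illustration; they are NOT the paper's data.
from scipy.stats import kruskal

# Number of web elements per story in two hypothetical groups
popular_elements = [28, 31, 25, 40, 29, 33, 27]
unpopular_elements = [21, 18, 22, 19, 24, 20, 17]

statistic, p_value = kruskal(popular_elements, unpopular_elements)
print("H = %.3f, p = %.4f" % (statistic, p_value))

# Reject the null hypothesis (same distribution) at the 0.05 level
if p_value <= 0.05:
    print("The two groups differ significantly on this feature.")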

 

After the presentation, we had lunch, during which some attendees continued the discussion about my research. It was a useful discussion regarding the future of my research, especially integrating data from archived collections with storytelling services.

The "user studies for and evaluation of digital library systems and applications" session started after the break with the presentation "On the Impact of Academic Factors on Scholar Popularity: A Cross-Area Study” by Marcos Gonçalves. Gonçalves et al. presented a cross-area study on the impact of key academic factors on scholar popularity, aiming to understand how different factors affect it. They conducted their study based on scholars affiliated with different graduate programs in Brazil and internationally, with more than 1,000 scholars and 880,000 citations over a 16-year period. They found that scholars in technological programs (e.g., Computer Science, Electrical Engineering, Bioinformatics) tend to be the most "popular" ones in their universities. They also found that international popularity is still much higher than that obtained by Brazilian scholars.

After the first presentation, there was a panel on "User Studies and Evaluation" by George Buchanan, Dana McKay, and Giannis Tsakonas, moderated by Seamus Ross, as a replacement for two presentations whose presenters were absent. The panel started with a question from Seamus Ross: Are user studies in digital libraries soft? Each of the panelists presented their point of view on the importance of user studies. Buchanan said that user studies matter, and Dana followed up that we want to create something that all people can use. Tsakonas said he did studies that never developed into systems. Seamus Ross asked the panelists: what makes a person good at user studies? Dana answered with a joke: "choose someone like me". Dana works as User Experience Manager and Architect at the academic library of Swinburne University of Technology, so she has experience with users' needs and user studies. I followed up that we do user studies to learn what people need or to evaluate a system, then asked whether Mechanical Turk (MTurk) experiments are a form of user studies. At the end, Seamus Ross concluded the panel with some advice on conducting user studies, such as considering a feedback loop in the process of a user study.

After the panel, we had a coffee break. I had a great discussion about user evaluation in the context of my research with Brigitte Mathiak, who gave me much useful advice about evaluating the stories that will be created automatically from the web archives. Later on my European trip I gave a presentation at Magdeburg-Stendal University of Applied Sciences that gives the big picture of my research.

In the last session, I attended Brigitte Mathiak's presentation of "Are there any Differences in Data Set Retrieval compared to well-known Literature Retrieval?". In the beginning, Mathiak explained the motivation for their work. Based on two user studies, a lab study with seven participants and telephone interviews with 46 participants, they investigated the requirements that users have for a data set retrieval system in the social sciences and in digital libraries. They found that choosing a data set is more important to researchers than choosing a piece of literature. Moreover, metadata quality and quantity are even more important for data sets.

In the evening, we had the conference dinner, which was held at Concordia Design along with beautiful music. At the dinner, the conference chairs announced two awards: the best paper award to Matthias Geel and Moira Norrie for "Memsy: Keeping Track of Personal Digital Resources across Devices and Services" and the best poster/demo award to Clare Llewellyn, Claire Grover, Beatrice Alex, Jon Oberlander, and Richard Tobin for "Extracting a Topic Specific Dataset from a Twitter Archive".



The third day started early at 9:00 am with sessions about digital humanities, in which I presented my study, “Detecting Off-Topic Pages in Web Archives”. The paper investigates different methods for automatically detecting when an archived page goes off-topic. It presents six different methods that mainly depend on comparing an archived copy of a page (a memento) with the first memento of that page. The methods were tested on different archived collections from Archive-It. The suggested best method was a combination of a textual method (cosine similarity using TF-IDF) and a structural method (word count). The best combined methods for detecting off-topic pages gave an average precision of 0.92 on 11 different collections. The output of this research is a tool for detecting off-topic pages in the archive. The code can be downloaded and tested from GitHub, and more information can be found in my recent presentation at the Internet Archive.
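To give a flavor of the textual side of the approach, here is a minimal sketch (my own illustration, not the released tool) that compares a later memento's text against the first memento using TF-IDF cosine similarity and a simple word-count ratio; the thresholds shown are placeholders, not the paper's tuned values:

# Sketch of the two signals described above; thresholds are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def off_topic_signals(first_memento_text, later_memento_text):
    """Return (cosine similarity over TF-IDF vectors, word-count ratio)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([first_memento_text, later_memento_text])
    cosine = cosine_similarity(tfidf[0], tfidf[1])[0][0]
    wc_ratio = len(later_memento_text.split()) / max(len(first_memento_text.split()), 1)
    return cosine, wc_ratio

# Hypothetical usage with text extracted from two mementos of the same page
cosine, wc_ratio = off_topic_signals(
    "tahrir square protests continue as crowds gather in cairo",
    "this domain has expired and is now for sale")
if cosine < 0.15 or wc_ratio < 0.20:   # example thresholds
    print("Possibly off-topic memento")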


 

The next paper presented in the digital humanities session was "Supporting Exploration of Historical Perspectives across Collections" by Daan Odijk. In their work, Odijk et al. introduced tools for selecting, linking, and visualizing a Second World War (WWII) collection drawn from the collections of the NIOD, the National Library of the Netherlands, and Wikipedia. They link digital collections via implicit events, i.e., if two articles are close in time and similar in content, they are considered to be related. Furthermore, they provided an exploratory interface to explore the connected collections. They used Manhattan distance for textual similarity over document terms in a TF-IDF weighted vector space and measured temporal similarity using a Gaussian decay function. They found that textual similarity performed better than temporal similarity, and that combining textual and temporal similarity improved the nDCG score.

The third paper, entitled "Impact Analysis of OCR Quality on Research Tasks in Digital Archives", was presented by Myriam C. Traub. Traub et al. performed user studies on digital archives to classify research tasks and describe the potential impact of OCR quality on these tasks by interviewing scholars from the digital humanities. They analyzed the questions and categorized the research tasks. Myriam said that few scholars could quantify the impact of OCR errors on their own research tasks. They found that OCR is unlikely to be perfect. They could not find solutions, but they could suggest strategies that lead to solutions. At the end, Myriam suggested that the tools should be open source and that there should be evaluation metrics.


 
At the end, I attended the last keynote speech, by Costis Dallas – "The post-repository era: scholarly practice, information and systems in the digital continuum" – which was about digital humanists' practices in the age of curation. Then the conference ended with the closing session, in which TPDL 2016 in Hannover, Germany was announced.

After the conference, Sawood and I took the train from Poznan to Potsdam, Germany to meet Dr. Michael A. Herzog, the Vice Dean for Research and Technology Transfer in the Department of Economics and head of the SPiRIT Research Group. We were invited to talk about our research in a Digital Preservation lecture at Magdeburg-Stendal University of Applied Sciences in Magdeburg. Sawood wrote a nice blog post about our talks.



---
Yasmin

Monday, September 21, 2015

2015-09-21: InfoVis Spring 2015 Class Projects

In Spring 2015, I taught Information Visualization (CS 725/825) for MS and PhD students.  This time we used Tamara Munzner's Visualization Analysis & Design textbook, which I highly recommend:
"This highly readable and well-organized book not only covers the fundamentals of visualization design, but also provides a solid framework for analyzing visualizations and visualization problems with concrete examples from the academic community. I am looking forward to teaching from this book and sharing it with my research group."
—Michele C. Weigle, Old Dominion University
I also tried a flipped-classroom model, where students read and answer homework questions before class so that class time can focus on discussion, student presentations, and in-class exercises. It worked really well -- students liked the format, and I didn't have to convert a well-written textbook into PowerPoint slides.

Here I highlight a couple of student projects from that course.  (All class projects are listed in my InfoVis Gallery.)

Chesapeake Bay Currents Dataset Exploration
Created by Teresa Updyke


Teresa is a research scientist at ODU's Center for Coastal Physical Oceanography (CCPO). This visualization (currently available at http://www.radarops.comoj.com/CS725/project/index.html) gives a view of the metadata related to the high-frequency radar data that the CCPO collects. For all stations, users can investigate the number of data files available, station count, vector count, and average speed of the currents. The map allows users to select one of the three stations and further investigate the radial count and type collected on each day. This visualization aids researchers in quickly determining the quality of data collected at specific times and in identifying interesting areas for further investigation.

The thing I really liked about this project was that it solved a real problem and will help Teresa to do her job better. I asked Teresa how researchers previously determined what data was available.  Her reply: "They called me, and I looked it up in the log files."


In and Out Flow of DoD Contracting Dollars
Created by Kayla Henneman and Swaraj Wankhade



This project (currently available at http://kaylamarie0110.github.io/infovis/project.html) is a visualization of the flow of Department of Defense (DoD) contracting dollars to and from the Hampton Roads area of Virginia. The system is for those who wish to analyze how the in- and out-flow of DoD contracting dollars affects the Hampton Roads economy. The visualization consists of an interactive bubble map which shows the flow of DoD contracting dollars to and from Hampton Roads based on counties, along with line charts which show the total amount of inflow and outflow dollars.  Hovering over a county on the map shows the inflow and outflow amounts for that county over time.



Federal Contracting in Hampton Roads
Created by Valentina Neblitt-Jones and Shawn Jones


This project (currently available at http://shawnmjones.github.io/hr-contracting/app/index.html) is a visualization for US federal government contracting awards in the Hampton Roads region of Virginia. The visualization consists of a choropleth map displaying different colors based on the funding each locality receives. To the right of the map is a bar chart indicating how much funding each industry received. On top of the map and the bar chart is a sparkline showing the trend in funding. The visualization allows the user to select a year, agency, or locality within the Hampton Roads area and updates the choropleth, bar chart, and sparkline as appropriate.


-Michele

Thursday, September 10, 2015

2015-09-10: CDXJ: An Object Resource Stream Serialization Format

I have been working on an IIPC-funded project of profiling various web archives to summarize their holdings. The idea is to generate statistical measures of the holdings of an archive under various lookup keys, where a key can be a partial URI such as a Top Level Domain (TLD), a registered domain name, an entire domain name along with any number of sub-domain segments, a domain name and a few segments from the path, a given time, a language, or a combination of two or more of these. Such a document (or archive profile) can be used to answer queries like "how many *.edu Mementos are there in a given archive?", "how many copies of pages are there in an archive that fall under netpreserve.org/projects/*", or "how many copies of *.cnn.com/* pages from 2010 are in the Arabic language?". The archive profile can also be used to determine the overlap between two archives or to visualize their holdings in various ways. Early work of this research was presented at the Internet Archive during the IIPC General Assembly 2015 and later it was published at:
Sawood Alam, Michael L. Nelson, Herbert Van de Sompel, Lyudmila L. Balakireva, Harihar Shankar, and David S. H. Rosenthal, Web Archive Profiling Through CDX Summarization, Proceedings of TPDL 2015.
One of many challenges to solve in this project was to come up with a suitable profile serialization format that has the following properties:
  • scalable to handle terabytes of data
  • facilitates fast lookup of keys
  • allows partial key lookup
  • allows arbitrary split and merge
  • allows storage of arbitrary number of attributes without changing the existing data
  • supports link data semantics (not essential, but good to have)
We were initially leaning towards the JSON format (with JSON-LD for linked data) because it has wide language and tool support and it is as expressive as XML, but less verbose. However, in the very early stage of our experiments we realized that it has scaling issues. JSON, XML, and YAML (a more human-readable format) are all single-root-node document formats, which means that a single document serialized in any of these formats cannot have multiple starting nodes; they all must be children of a single root node. This means the document has to be fully loaded into memory, which can be a bottleneck in the case of big documents. Although there are streaming algorithms to parse XML or JSON, they are slow and usually only suitable for cases when an action is to be taken while parsing the document, as opposed to frequent lookup of keys and values. Additionally, JSON and XML are not very fault tolerant, i.e., a single malformed character may make the entire document fail to parse. Also, because of the single root node, splitting and merging documents is not easy.

We also thought about using simple, flat file formats such as CSV, ARFF, or CDX (a file format used in indexing WARC files for replay). These flat formats allow sorting, which facilitates fast binary lookup of keys, and the files can be split at arbitrary places or multiple files with the same fields (in the same order) can be merged easily. However, the issue with these formats is that they do not support nesting, and every entry in them must have the same attributes. Additionally, CDX has limited scope for extension, as all the fields are already described and reserved.

Finally, we decided to merge the good qualities of CDX and JSON to come up with a serialization format that fulfills our requirements listed above. We call it CDXJ (or CDX-JSON). Ilya Kreymer first introduced this format in PyWB, but there was no formal description of it. We are trying to formally introduce it and make some changes that make it extensible so that it can be utilized by the web archiving community as well as broader web communities. The general description of the format is: "a plain file format that stores key-value pairs per line, in which the keys are strings followed by their corresponding value objects, where the values are any valid JSON with the exception that the JSON block does not contain any newline characters (an encoded newline "\n" is allowed)." Here is an example:
@context ["http://oduwsdl.github.io/contexts/arhiveprofiles"]
@id {"uri": "http://archive.org/"}
@keys ["surt_uri", "year"]
@meta {"name": "Internet Archive", "year": 1996}
@meta {"updated_at": "2015-09:03T13:27:52Z"}
com,cnn)/world - {"urim": {"min": 2, "max": 9, "total": 98}, "urir": 46}
uk,ac,rpms)/ - {"frequency": 241, "spread": 3}
uk,co,bbc)/images 2013 {"frequency": 725, "spread": 1}

Lines starting with the @ sign signify special-purpose keys, and this prefix makes these lines appear together at the top of the file when sorted. The first line of the above example, with the key @context, provides context for the keywords used in the rest of the document. The value of this entry can be an array of contexts or an object with named keys. In the case of an array, all the term definitions from all the contexts will be merged into the global namespace (resolving name conflicts is the responsibility of the document creator), while in the case of a named object it will serve like an XML namespace.

The second entry, @id, holds an object that identifies the document itself and establishes relationships with other documents, such as a parent or sibling when the document is split. The @keys entry specifies the names of the key fields in the data section as an array of field names in the order they appear (the primary key name appears first, then the secondary key, and so on). To add more information about the keys, each element of the @keys array can be an object. All lines except the special keys (@-prefixed) must have exactly the number of fields described in the @keys entry. Missing fields in the keys must have the special placeholder character "-".

The @meta entries describe the aboutness of the resource and other metadata. Multiple entries of the same special key (starting with an @ sign) should be merged at the time of consuming the document. Splitting them over multiple lines increases readability and eases the process of updates. This means the two @meta lines above could be combined into a single line or split into three different lines, each holding "name", "year", and "updated_at" separately. The policy for resolving name conflicts when merging such entries should be defined on a per-key basis as suitable. These policies could be "skip", "overwrite", "append" (especially for values that are arrays), or some other function to derive new value(s).

The last three lines are the data entries. The first one starts with the key com,cnn)/world (the SURT form of http://www.cnn.com/world) followed by a nested data structure (in JSON format) that holds some statistical distribution of the archive holdings under that key. The next line holds a different style of statistics (to illustrate the flexibility of the format) for a different key. The last line illustrates a secondary key, in which the primary key is the SURT form of a URI followed by a secondary key that further divides the distribution by year.
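As an informal illustration (not a reference implementation), the following Python sketch parses the data entries of the example above, splitting each line into the fields named by @keys and decoding the trailing JSON block:

# Rough sketch of consuming the example CDXJ data entries shown above.
# Assumes simple unquoted keys and well-formed single-line JSON values.
import json

cdxj_lines = [
    'com,cnn)/world - {"urim": {"min": 2, "max": 9, "total": 98}, "urir": 46}',
    'uk,ac,rpms)/ - {"frequency": 241, "spread": 3}',
    'uk,co,bbc)/images 2013 {"frequency": 725, "spread": 1}',
]
field_names = ["surt_uri", "year"]  # as declared by the @keys entry

records = []
for line in cdxj_lines:
    # The JSON value starts at the first opening brace or bracket;
    # everything before it is the space-separated compound key.
    json_start = min(i for i in (line.find("{"), line.find("[")) if i != -1)
    key_fields = line[:json_start].split()
    key = {name: (field if field != "-" else None)   # "-" marks a missing key field
           for name, field in zip(field_names, key_fields)}
    records.append((key, json.loads(line[json_start:])))

for key, value in records:
    print(key, "->", value)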

Now, let's reproduce the above example in JSON-LD, YAML, and XML respectively for comparison:
{
  "@context": "http://oduwsdl.github.io/contexts/arhiveprofiles",
  "@id": "http://archive.org/",
  "meta": {
    "name": "Internet Archive",
    "year": 1996,
    "updated_at": "2015-09:03T13:27:52Z"
  },
  "surt_uri": {
    "com,cnn)/world": {
      "urim": {
        "min": 2,
        "max": 9,
        "total": 98
      },
      "urir": 46
    },
    "uk,ac,rpms)/": {
      "frequency": 241,
      "spread": 3
    },
    "uk,co,bbc)/images": {
      "year": {
        "2013": {
          "frequency": 725,
          "spread": 1
        }
      }
    }
  }
}
---
  @context: "http://oduwsdl.github.io/contexts/arhiveprofiles"
  @id: "http://archive.org/"
  meta: 
    name: "Internet Archive"
    year: 1996
    updated_at: "2015-09:03T13:27:52Z"
  surt_uri: 
    com,cnn)/world: 
      urim: 
        min: 2
        max: 9
        total: 98
      urir: 46
    uk,ac,rpms)/: 
      frequency: 241
      spread: 3
    uk,co,bbc)/images: 
      year: 
        2013: 
          frequency: 725
          spread: 1
<?xml version="1.0" encoding="UTF-8"?>
<profile xmlns="http://oduwsdl.github.io/contexts/arhiveprofiles">
  <id>http://archive.org/</id>
  <meta>
    <name>Internet Archive</name>
    <year>1996</year>
    <updated_at>2015-09:03T13:27:52Z</updated_at>
  </meta>
  <data>
    <record surt-uri="com,cnn)/world">
      <urim>
        <min>2</min>
        <max>9</max>
        <total>98</total>
      </urim>
      <urir>46</urir>
    </record>
    <record surt-uri="uk,ac,rpms)/">
      <frequency>241</frequency>
      <spread>3</spread>
    </record>
    <record surt-uri="uk,co,bbc)/images" year="2013">
      <frequency>725</frequency>
      <spread>1</spread>
    </record>
  </data>
</profile>

The WAT format, commonly used in the web archiving community, also uses JSON fragments as values for each entry separately to deal with the single-root document issue, but it does not restrict the use of the newline character. As a consequence, sorting the file line-wise is not possible, which affects the lookup speed. In contrast, CDXJ files can be sorted (like CDX files), which allows binary search in the files on disk and proves very efficient in lookup-heavy applications.
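To make the lookup claim concrete, here is a rough sketch (not production code) of a prefix lookup over a byte-wise sorted CDXJ file: it binary-searches byte offsets and re-aligns to line boundaries, so the file never has to fit in memory. The file name in the usage comment is hypothetical.

# Rough sketch: prefix lookup in a byte-wise sorted CDXJ file using binary
# search over byte offsets, so the whole file never has to be loaded.
def cdxj_lookup(path, key_prefix):
    """Yield lines whose key starts with key_prefix from a sorted CDXJ file."""
    prefix = key_prefix.encode()
    with open(path, "rb") as f:
        f.seek(0, 2)
        lo, hi = 0, f.tell()
        # Narrow down to a byte offset just before the first matching line.
        while hi - lo > 1:
            mid = (lo + hi) // 2
            f.seek(mid)
            f.readline()                 # discard the partial line we landed in
            line = f.readline()
            if not line or line >= prefix:
                hi = mid
            else:
                lo = mid
        f.seek(lo)
        if lo > 0:
            f.readline()                 # re-align to the next full line
        for raw in f:
            if raw.startswith(prefix):
                yield raw.decode().rstrip("\n")
            elif raw > prefix:           # sorted file: no more matches possible
                break

# Hypothetical usage against a sorted profile file:
# for entry in cdxj_lookup("archive-profile.cdxj", "com,cnn)/"):
#     print(entry)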

We presented these early thoughts on the CDXJ serialization format, to seek feedback, at Stanford University during the IIPC GA 2015. The slides of the talk are available at:


Going forward, we are proposing to split the syntax and semantics of the format into separate specifications, where the overall syntax of the file is defined as a base format, while further restrictions and semantics, such as adding meaning to the keys, making certain entries mandatory, giving meaning to the terms, enforcing a specific sort order, and defining the scope of usage for the document, are described in a separate derived specification. This practice is analogous to XML, which defines the basic syntax of the format, while other XML-based formats such as XHTML or Atom add semantics to it.

A generic format for this purpose can be defined as Object Resource Stream (ORS), which registers the ".ors" file extension and the "application/ors" media type. CDXJ then extends it to add semantics (as described above) and registers the ".cdxj" file extension and the "application/cdxj+ors" media type.

Object Resource Stream (ORS)

The above railroad diagram illustrates the grammar of the ORS format. Every entry in this format occupies one line. Empty lines are allowed and should be skipped when consuming the file. Apart from the empty lines, every line starts with a string key, followed by a single-line JSON block as the value. The keys are allowed to have optional leading or trailing spaces (SPACE or TAB characters) for indentation, readability, or alignment purposes, but these should be skipped when consuming the file. Keys can be empty strings, which means values (JSON blocks) can be present without being associated with any key. Quoting keys is not mandatory, but if necessary one can use double quotes for the purpose. Quoted string keys preserve any leading or trailing spaces inside the quotes. Neither keys nor values are allowed to break the line (any newlines should be encoded), as a line break starts a new entry. As mentioned before, it is allowed to have a value block without a corresponding key, but not the opposite. Since the opening square and curly brackets indicate the start of the JSON block, it is necessary to escape them (as well as the escape and double quote characters) if they appear in the keys; optionally, their closing pairs should also be escaped. An ORS parser should skip malformed lines and continue with the remaining document. Optionally, the malformed entries can be reported as warnings.
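A rough sketch of the tolerant parsing behavior described above might look like the following; it skips blank and malformed lines and allows an empty key, but deliberately omits the key quoting and escaping that a full parser would need:

# Sketch of a tolerant ORS reader: yields (key, value) pairs, skipping blank and
# malformed lines as suggested above. Key quoting/escaping is not handled here.
import json
import warnings

def read_ors(lines):
    for lineno, line in enumerate(lines, start=1):
        if not line.strip():
            continue                       # empty lines are allowed and skipped
        # The JSON block starts at the first "{" or "[" (escaping not handled).
        starts = [i for i in (line.find("{"), line.find("[")) if i != -1]
        try:
            json_start = min(starts)
            key = line[:json_start].strip()    # may legitimately be an empty string
            value = json.loads(line[json_start:])
        except ValueError:
            warnings.warn("Skipping malformed ORS line %d" % lineno)
            continue
        yield key, value

# Hypothetical usage:
# with open("stream.ors") as f:
#     for key, value in read_ors(f):
#         print(key, value)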

CDXJ (or CDX-JSON)

The above railroad diagram illustrates the grammar of the CDXJ format. CDXJ is a subset of ORS, as it introduces a few extra restrictions in the syntax that are not present in the ORS grammar. In the CDXJ format the definition of the key string is stricter: it does not allow leading spaces before the key or an empty string as the key. If there are spaces in a CDXJ key string, it is considered a compound key where every space-separated segment has an independent meaning. Apart from the @-prefixed special keys, every key must have the same number of space-separated fields, and empty fields use the placeholder "-". CDXJ only allows a single SPACE character as the delimiter between the parts of the compound key. It also enforces a SPACE character to separate the key from the JSON value block. As opposed to ORS, CDXJ does not allow the TAB character as a delimiter. Since keys cannot be empty strings in CDXJ, there must be a non-empty key associated with every value. Additionally, the CDXJ format prohibits empty lines. These restrictions are introduced in CDXJ to encourage its use as sorted files to facilitate binary search on disk. When sorting CDXJ files, byte-wise sorting is encouraged for greater interoperability (this can be achieved on Unix-like operating systems by setting the environment variable LC_ALL=C). On the semantics side, CDXJ introduces optional @-prefixed special keys to specify metadata, the @keys key to specify the field names of the data entries, and the @id and @context keys to provision linked-data semantics inspired by JSON-LD.

Applications

There are many applications where a stream of JSON blocks is being used or can be used. Some applications even enforce the single-line JSON restriction and optionally prefix the JSON block with associated keys. However, the format is not formally standardized and it is often called simply JSON for the sake of general understanding. The following are some example applications of the ORS or CDXJ format:
  • Archive Profiler generates profiles of web archives in the CDXJ format. An upcoming service will consume profiles in the CDXJ format to produce a probabilistic, rank-ordered list of web archives likely to hold a given URI.
  • PyWB accepts (and encourages the usage of) CDXJ format for the archive collection indexes and the built-in collection indexer allows generating CDXJ indexes.
  • MemGator is a Memento aggregator that I built. It can be used as a command line tool or run as a web service. The tool generates TimeMaps in CDXJ format along with Link and JSON formats. The CDXJ response is sorted by datetime as the key, which makes it very easy and efficient to consume the data chronologically or to use text processing tools to filter based on a partial datetime.
  • The 200 million Reddit link (post) corpus that I collected and archived recently (it will be made publicly available soon) is in CDXJ format (where the key is the link id), while the 1.65 billion Reddit comments corpus is available in a format that conforms to the ORS format (although it is advertised as a series of JSON blocks delimited by newlines).
  • The very popular container technology Docker and the popular log aggregation and unification service Fluentd use a data format that conforms to the above-described ORS specification. Docker calls its logging driver JSON, but it actually generates a stream of single-line JSON blocks that can also have a timestamp prefix with nanosecond precision as the key for each JSON block. The Fluentd log is similar, but it can have more key fields as a prefix to each JSON line.
  • NoSQL databases, including key-value store, tuple store, data structure server, object database, and wide column store implementations such as Redis and CouchDB, can use the ORS/CDXJ format to import and export their data to and from disk.
  • Services that provide data streams and support the JSON format, such as Twitter and Reddit, can leverage ORS (or CDXJ) to avoid an unnecessary wrapper that encapsulates the data under a root node. This allows immediate consumption of the stream of data as it arrives at the client, without waiting for the end of the stream.
In conclusion, we proposed a generic Object Resource Stream (ORS) data serialization format that is composed of single-line JSON values with an optional preceding string key per line. For this format we proposed the file extension ".ors" and the media type "application/ors". Additionally, we proposed CDXJ, a derivative format of ORS with additional semantics and restrictions. For the CDXJ format we proposed the file extension ".cdxj" and the media type "application/cdxj+ors". The two formats, ORS and CDXJ, can be very helpful in dealing with endless streams of structured data such as server logs, Twitter feeds, and key-value stores. These formats allow arbitrary information in each entry (like schema-less NoSQL databases) as opposed to fixed-field formats such as spreadsheets or relational databases. Additionally, these formats are friendly to text processing tools (such as sort, grep, and awk), which makes them very useful and efficient for file-based data stores. We have also recognized that the proposed formats are already in use on the Web and have proved their usefulness. However, they are neither formally defined nor given a separate media type.

--
Sawood Alam

Tuesday, September 8, 2015

2015-09-08: Releasing an Open Source Python Project, the Services That Brought py-memento-client to Life

The LANL Library Prototyping Team recently received correspondence from a member of the Wikipedia team requesting Python code that could find the best URI-M for an archived web page based on the date of the page revision. Collaborating with Wikipedia, Harihar Shankar, Herbert Van de Sompel, Michael Nelson, and I were able to create the py-memento-client Python library to suit the needs of pywikibot.

Over the course of library development, Wikipedia suggested the use of two services, Travis CI and Pypi, that we had not used before.  We were very pleased with the results of those services and learned quite a bit from the experience.  We have been using GitHub for years, and also include it here as part of the development toolchain for this Python project.

We present three online services that solved the following problems for our Python library:
  1. Where do we store source code and documentation for the long term? - GitHub
  2. How do we ensure the project is well tested in an independent environment?  - Travis CI
  3. Where do we store the final installation package for others to use? - Pypi
We start first with storing the source code.

GitHub

As someone who is concerned about the longevity of the scholarly record, I cannot emphasize enough how important it is to check your code in somewhere safe.  GitHub provides a wide variety of tools, at no cost, that allow one to preserve and share their source code.

Git and GitHub are not the same thing.  Git is just a source control system.  GitHub is a dedicated web site providing additional tools and hosting for git repositories.

Here are some of the benefits of just using Git (without GitHub):
  1. Distributed authoring - many people can work separately on the same code and commit to the same place
  2. Branching is built in, allowing different people to work on features in isolation (like unfinished support for TimeMaps)
  3. Tagging can easily be done to annotate a commit for release
  4. Many IDEs and other development tools support Git out of the box
  5. Ease of changing remote git repositories if switching from one site to another is required
  6. Every git clone is actually a copy of the master branch of the repository and all of its history, talk about LOCKSS!!!
That last one is important.  It means that all one needs to do is clone a git repository and they now have a local archive of that repository branch, with complete history, at the time of cloning.  This is in contrast to other source control systems, such as Subversion, where the server is the only place storing the full history of the repository.  Using git avoids this single point of failure, allowing us to still have an archival copy, including history, in case our local git server or GitHub goes away.


Here are some of the benefits of using GitHub:
  1. Collaboration with others inside and outside of the project team, through the use of pull requests, code review, and an issue tracker
  2. Provides a GUI for centralizing and supporting the project
  3. Allows for easy editing of documentation using Markdown, and also provides a wiki, if needed
  4. The wiki can also be cloned as a Git repository for archiving!
  5. Integrates with a variety of web services, such as Travis CI
  6. Provides release tools that allow adding of release notes to tags while providing compiled downloads for users
  7. Provides a pretty-parsed view of the code where quick edits can be made on the site itself
  8. Allows access from multiple Internet-connected platforms (phone, tablet, laptop, etc.)
  9. And so much more that we have not yet explored....
We use GitHub for all of these reasons and we are just scratching the surface.  Now that we have our source code centralized, how do we independently build and test it?

Travis CI

Travis CI provides a continuous integration environment for code. In our case, we use it to determine the health of the existing codebase.

We use it to evaluate code for the following:
  1. Does it compile? - tests for syntax and linking errors
  2. Can it be packaged? - tests for build script and linking errors
  3. Does it pass automated tests? - tests that the last changes have not broken functionality
Continuous integration provides an independent test of the code. In many cases, developers get code to work on their magic laptop or their magic network and it works for no one else. Continuous Integration is an attempt to mitigate that issue.

Of course, far more can be done with continuous integration, like publishing released binaries, but with our time and budget, the above is all we have done thus far.

Travis CI provides a free continuous integration environment for code.  It easily integrates with GitHub.  In fact, if a user has a GitHub account, logging into Travis CI will produce a page listing all GitHub projects that they have access to. To enable a project for building, one just ticks the slider next to the desired project.

It then detects the next push to GitHub and builds the code based on a .travis.yml file, if present in the root of the Git repository.

The .travis.yml file has a relatively simple syntax whereby one specifies the language, language version, environment variables, prerequisites, and then build steps.

Our .travis.yml looks as follows:

language: python
cache: # caching is only available for customers who pay
    directories:
        - $HOME/.cache/pip
python:
    - "2.7"
    - "3.4"
env:
    - DEBUG_MEMENTO_CLIENT=1
install: 
    - "pip install requests"
    - "pip install pytest-xdist"
    - "pip install ."
script:
    - python setup.py test
    - python setup.py sdist bdist_wheel
branches:
    only:
        - master

The language section tells Travis CI which language is used by the project. Many languages are available, including Ruby and Java.

The cache section allows caching of installed library dependencies on the server between builds. Unfortunately, the cache section is only available for paid customers.

The python section lists the versions of Python for which the project will be built.  Travis CI will attempt a parallel build for every version specified here.  The Wikimedia folks wanted our code to work with both Python 2.7 and 3.4.

The env section contains environment variables for the build.

The install section runs any commands necessary for installing additional dependencies prior to the build.  We use it in this example to install dependencies for testing.  In the current version of the project this section has been removed because we now handle dependencies directly via Python's setuptools, but it is provided here for completeness.

The script section is where the actual build sequence occurs.  This is where the steps are specified for building and testing the code.  In our case, Python needs no compilation, so we skip straight to our automated tests before building source and binary packages to ensure that our setup.py is configured correctly.

Finally, the branches section is where one indicates which branches to build.  We only wanted to focus on master for now.

There is extensive documentation indicating what else one can do with .travis.yml.

Once changes have been pushed to GitHub, Travis CI detects the push and begins a build.  As seen below, there are two builds for py-memento-client:  one for Python 2.7 and one for 3.4.



Clicking on one of these boxes allows one to watch the results of a build in real time, as shown below. Also present is a link allowing one to download the build log for later use.


All of the builds that have been performed are available for review.  Each entry contains information about the commit, including who made it, as well as how long the build took, when it took place, how many tests passed, and, most importantly, whether it was successful.  Status is indicated by color:  green for success, red for failure, and yellow for in progress.


Using Travis CI we were able to provide an independent sanity check on py-memento-client, detecting test data that was network-dependent and also eliminating platform-specific issues.  We developed py-memento-client on OSX and tested it at LANL on OSX and Red Hat Enterprise Linux, but Travis CI runs on Ubuntu Linux, so we now have confidence that our code performs well in different environments.
Closing thought:  all of this verification only works as well as the automated tests, so focus on writing good tests.  :)
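
As a quick illustration, here is a minimal pytest sketch of the kind of fast, network-independent check that runs for us under "python setup.py test"; the MementoClient constructor and its timegate_uri attribute are assumptions based on our reading of the library, so treat this as a sketch rather than a definitive reference:

# test_client_defaults.py - a minimal, hypothetical pytest sketch; the
# MementoClient constructor and its timegate_uri attribute are assumed
# from our reading of the memento_client library.
from memento_client import MementoClient


def test_client_has_a_default_timegate():
    # The client should come up configured with some default timegate,
    # without requiring any network access during the test.
    mc = MementoClient()
    assert mc.timegate_uri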

Pypi

Finally, we wanted to make it straightforward to install py-memento-client and all of its dependencies:

pip install memento_client

Getting there required Pypi, a site that globally hosts Python projects (mostly libraries).  Pypi not only provides storage for built code so that others can download it, but also requires that metadata be provided so that others can see what functionality the project offers.  Below is an image of the Pypi splash page for py-memento-client.


Getting support for Pypi and producing the data for this splash page required that we use Python setuptools for our build. Our setup.py file, inspired by Jeff Knupp's "Open Sourcing a Python Project the Right Way", provides support for a complete build of the Python project.  Below we highlight the setup function that is the cornerstone of the whole build process.

setup(
    name="memento_client",
    version="0.5.1",
    url='https://github.com/mementoweb/py-memento-client',
    license='LICENSE.txt',
    author="Harihar Shankar, Shawn M. Jones, Herbert Van de Sompel",
    author_email="prototeam@googlegroups.com",
    install_requires=['requests>=2.7.0'],
    tests_require=['pytest-xdist', 'pytest'],
    cmdclass={
        'test': PyTest,
        'cleanall': BetterClean
        },
    download_url="https://github.com/mementoweb/py-memento-client",
    description='Official Python library for using the Memento Protocol',
    long_description="""
The memento_client library provides Memento support, as specified in RFC 7089 (http://tools.ietf.org/html/rfc7089)
For more information about Memento, see http://www.mementoweb.org/about/.
This library allows one to find information about archived web pages using the Memento protocol.  It is the goal of this library to make the Memento protocol as accessible as possible to Python developers.
""",
    packages=['memento_client'],
    keywords='memento http web archives',
    extras_require = {
        'testing': ['pytest'],
        "utils": ["lxml"]
    },
    classifiers=[

        'Intended Audience :: Developers',

        'License :: OSI Approved :: BSD License',

        'Operating System :: OS Independent',

        'Topic :: Internet :: WWW/HTTP',
        'Topic :: Scientific/Engineering',
        'Topic :: Software Development :: Libraries :: Python Modules',
        'Topic :: Utilities',

        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3.4'
    ]
)

Start by creating this function call to setup, supplying all of these named arguments.  Those processed by Pypi are name, version, url, license, author, download_url, description, long_description, keywords, and classifiers.  The other arguments are used during the build to install dependencies and run tests.

The name and version arguments are used as the title for the Pypi page.  They are also used by those running pip to install the software.  Without these two items, pip does not know what it is installing.
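
For example, because pip keys off the name and version, a user can request this exact release with:

pip install memento_client==0.5.1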

The url argument is interpreted by Pypi as Home Page and will display on the web page using that parameter.

The license argument is used to specify how the library is licensed. Here we have a defect: we wanted users to refer to our LICENSE.txt file, but Pypi interprets the value literally, printing License: LICENSE.txt.  We may need to fix this.
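
One possible fix, sketched below rather than applied, would be to give Pypi the license name itself (matching the BSD classifier in our setup.py) instead of a file reference; only the relevant arguments are repeated here:

from setuptools import setup

# A possible correction (not yet applied): pass the license name for
# Pypi to display, rather than a reference to LICENSE.txt.
setup(
    name="memento_client",
    version="0.5.1",
    license="BSD",
    packages=["memento_client"],
)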

The author argument maps to the Pypi Author field and will display literally as typed, so commas are used to separate authors.

The download_url argument maps to the Pypi Download URL field.

The description argument becomes the subheading of the Pypi splash page.

The long_description argument becomes the body text of the Pypi splash page.  All URIs become links, but attempts to put HTML into this field produced a splash page displaying raw HTML, so we left it as plain text until we require richer formatting.

The keywords argument maps to the Pypi Keywords field.

The classifiers argument maps to the Pypi Categories field.  When choosing classifiers for a project, use this registry.  This field is used to index the project on Pypi to make finding it easier for end users.

For more information on what goes into setup.py, check out "Packaging and Distributing Projects" and "The Python Package Index (PyPI)" on the Python.org site.

Once we had our setup.py configured appropriately, we had to register for an account with Pypi.  We then created a .pypirc file in the builder's home directory with the contents shown below.

[distutils]
index-servers =
    pypi

[pypi]
repository: https://pypi.python.org/pypi
username: hariharshankar
password: <password>

The username and password fields must both be present in this file. We encountered a defect while uploading the content: setuptools did not prompt for the password if it was missing, and the upload failed.

Once that is in place, use the existing setup.py to register the project from the project's source directory:

python setup.py register

Once that is done, the project shows up on the Pypi web site under that Pypi account. After that, publish it by typing:

python setup.py sdist upload

And now it will show up on Pypi for others to use.
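
To give a feel for the end result, here is a rough usage sketch; the get_memento_info() call and the dictionary keys it returns reflect our understanding of the memento_client interface and should be treated as illustrative rather than definitive:

# usage_sketch.py - an illustrative example of using the published
# library after "pip install memento_client".  The get_memento_info()
# method and the keys of the dictionary it returns are based on our
# reading of the library and may differ in detail.
from datetime import datetime

from memento_client import MementoClient

mc = MementoClient()
accept_datetime = datetime(2010, 4, 24, 19, 0)

# Ask for the archived copy of the page closest to the requested datetime.
info = mc.get_memento_info("http://www.cnn.com/", accept_datetime)
closest = info.get("mementos", {}).get("closest", {})

print("Closest memento: {0}".format(closest.get("uri")))
print("Archived at: {0}".format(closest.get("datetime")))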

Of course, one can also deploy code directly to Pypi using Travis CI, but we have not yet attempted this.

Conclusion


Open source development has evolved quite a bit over the last several years.  Among the first successful achievements were sites such as Freshmeat (now defunct) and SourceForge, which provided free repositories and publication sites for projects.  GitHub fulfills this role now, but developers and researchers need more complex tools.

Travis CI, coupled with good automated tests, allows independent builds and verification that software works correctly.  It ensures that a project not only compiles for users, but also passes functional tests in an independent environment.  As noted, one can even use it to deploy software directly.

Pypi is a Python-specific repository of Python libraries and other projects.  It is the backend of the pip tool commonly used by Python developers to install libraries.  Any serious Python development team should consider the use of Pypi for hosting and providing easy access to their code.

Using these three tools, we not only developed py-memento-client in a short amount of time, but also independently tested and published that library for others to enjoy.

--Shawn M. Jones
Graduate Research Assistant, Los Alamos National Laboratory
PhD Student, Old Dominion University