Bibliography

I just finished spending two and a half hours formatting my current list of references in the Harvard style, a requirement for my dissertation.

I think I might as well do this for every relevant paper as I find it, so at least I don’t need to go through such a lengthy exercise again. I will post the list here and keep updating it as time goes by (sorted alphabetically by title):

  • Plimmer, B. (2010) “A Comparative Evaluation of Annotation Software for Grading Programming Assignments”, Proceedings of the 11th Australasian User Interface Conference, Brisbane, Australia: Australian Computer Society, Inc., pp. 14-22
  • Plimmer, B. & Mason, P. (2006) “A Pen-based Paperless Environment for Annotating and Marking Student Assignments”, Proceedings of the 7th Australasian User Interface Conference – Volume 50, Hobart, Australia: Australian Computer Society, Inc., pp. 37-44
  • Kamin, S. et al. (2008) “A System for Developing Tablet PC Applications for Education”, Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education, Urbana, IL, USA: ACM, pp. 422-426
  • Huang, A. (2003) Ad-hoc Collaborative Document Annotation on a Tablet PC [Online], Brown University. Available from: Liverpool Library (Accessed: 6 April 2010)
  • Ramachandran, S. & Kashi, R. (2003) “An architecture for ink annotations on web documents”, Proceedings of the Seventh International Conference on Document Analysis and Recognition – Volume 1, IEEE Computer Society, p. 256
  • Johnson, P. (1994) “An instrumented approach to improving software quality through formal technical review”, Proceedings of the 16th International Conference on Software Engineering, Sorrento, Italy: IEEE Computer Society Press, pp. 113-122
  • Marshall, C. (1997) “Annotation: from paper books to the digital library”, Proceedings of the Second ACM International Conference on Digital Libraries, Philadelphia, Pennsylvania, United States: ACM, pp. 131-140
  • Schilit, B., Golovchinsky, G. & Price, M. (1998) “Beyond paper: supporting active reading with free form digital ink annotations”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Los Angeles, California, United States: ACM Press/Addison-Wesley Publishing Co., pp. 249-256
  • Chen, X. & Plimmer, B. (2007) “CodeAnnotator: Digital Ink Annotation within Eclipse”, Proceedings of the 19th Australasian Conference on Computer-Human Interaction: Entertaining User Interfaces, Adelaide, Australia: ACM, pp. 211-214
  • Myers, D. et al. (2004) “Developing marking support within Eclipse”, Proceedings of the 2004 OOPSLA Workshop on Eclipse Technology eXchange, Vancouver, British Columbia, Canada: ACM, pp. 62-66
  • Wolfe, J. (2000) “Effects of annotations on student readers and writers”, Proceedings of the Fifth ACM Conference on Digital Libraries, San Antonio, Texas, United States: ACM, pp. 19-26
  • Plimmer, B. (2008) “Experiences with digital pen, keyboard and mouse usability”, Journal on Multimodal User Interfaces, 2(1), July 2009, pp. 1783-7677
  • Plimmer, B. et al. (2010) “iAnnotate: Exploring Multi-User Ink Annotation in Web Browsers”, Proceedings of the 11th Australasian User Interface Conference, Brisbane, Australia: Australian Computer Society, Inc., CRPIT vol. 106
  • Improving Software Quality
  • Plimmer, B. et al. (2006) “Inking in the IDE: Experiences with Pen-based Design and Annotation”, Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing, IEEE Computer Society, pp. 111-115
  • Cheng, S. et al. (2008) “Issues of Extending the User Interface of Integrated Development Environments”, Proceedings of the 9th ACM SIGCHI New Zealand Chapter’s International Conference on Human-Computer Interaction: Design Centered HCI, Wellington, New Zealand: ACM, pp. 23-30
  • Golovchinsky, G. & Denoue, L. (2002) “Moving markup: repositioning freeform annotations”, Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology, Paris, France: ACM, pp. 21-30
  • Heinrich, E. & Lawn, A. (2004) “Onscreen Marking Support for Formative Assessment”, Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2004, Chesapeake, VA: AACE, pp. 1985-1992
  • Moran, T., Chiu, P. & van Melle, W. (1997) “Pen-based interaction techniques for organizing material on an electronic whiteboard”, Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, Banff, Alberta, Canada: ACM, pp. 45-54
  • Penmarked Comparative Evaluation
  • Simon, B. et al. (2004) “Preliminary experiences with a tablet PC based system to support active learning in computer science courses”, ACM SIGCSE Bulletin, 36(3), September, pp. 213-217
  • Priest, R. & Plimmer, B. (2006) “RCA: Experiences with an IDE Annotation Tool”, Proceedings of the 7th ACM SIGCHI New Zealand Chapter’s International Conference on Computer-Human Interaction: Design Centered HCI, Christchurch, New Zealand: ACM, pp. 53-60
  • Bargeron, D. & Moscovich, T. (2003) “Reflowing Digital Ink Annotations”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, Florida, USA: ACM, pp. 385-393
  • Brush, A. et al. (2001) “Robust annotation positioning in digital documents”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, Washington, United States: ACM, pp. 285-292
  • Spatial recognition and grouping of text and graphics
  • Mock, K. (2004) “Teaching with Tablet PCs”, Journal of Computing Sciences in Colleges, 20(2), December, pp. 17-27
  • Marshall, C. (1998) “Toward an ecology of hypertext annotation”, Proceedings of the Ninth ACM Conference on Hypertext and Hypermedia: Links, Objects, Time and Space – Structure in Hypermedia Systems, Pittsburgh, Pennsylvania, United States: ACM, pp. 40-49
  • Chatti, M. A. et al. (2007) “u-Annotate: an application for user-driven freeform digital ink annotation of e-learning content”, Sixth IEEE International Conference on Advanced Learning Technologies, Kerkrade, The Netherlands: IEEE Computer Society, pp. 1039-1043
  • Perez-Quinones, M. & Turner, S. (2004) “Using a Tablet-PC to Provide Peer-Review Comments”, Technical Report TR-04-17, Blacksburg, VA: Virginia Tech
  • Gehringer, E. et al. (2005) “Using peer review in teaching computing”, ACM SIGCSE Bulletin, 37(1), pp. 321-322
  • Koga, T. et al. (2005) “Web Page Marker: a Web Browsing Support System based on Marking and Anchoring”, Special Interest Tracks and Posters of the 14th International Conference on World Wide Web, Chiba, Japan: ACM, pp. 1012-1013

Last updated: 30 May 2010

Review patches or full code?

Having lived in the real world for many years, I’ve assumed that code review means review of patches. I’ve hardly ever witnessed reviews of entire codebases. But it’s just occurred to me that many people in the industry have no experience with patches – including those who have only worked with tools like Visual Studio and those who do web programming directly on servers (no version control).

If the participants in my study have no experience with patches, looking at a graphical diff will be like looking at the structure of a complex molecule to them, and their participation will effectively be useless for my study. I am now considering using only full code (e.g. the addition of one or more functions) in my study of code review, provided I can argue that the results would be equally valid if a patch view were used instead.

Code review using tablets – literature review (3)

I’ve completed what I’m going to call the “primary” literature review for my project. I printed all the relevant papers I have found to date, and in the last two days I read and annotated the most immediately relevant of them – the ones authored by Beryl Plimmer et al. that I mentioned in previous posts. I’ll post some of my notes here as well.

“A Pen-based Paperless Environment for Annotating and Marking Student Assignments” (2006):

  • An advantage of doing reviews electronically is that they can be done remotely. Paper allows free-form, unstructured, unrestricted expression, and that’s difficult with a simple textbox on a webpage, but a tablet allows for the best of both worlds.
  • An interesting point was mentioned that got me thinking: using different colours of ink for different types of statements (positive/negative, low/high severity) turned out to be unusable – very error prone. I will probably not offer a way to categorise freehand comments, but this is something worth keeping in mind if I get ideas in the future.
  • Think-aloud was also a failure; the authors attributed it to the fact that code review takes a lot of concentration, and saying what you’re thinking breaks that concentration. As a result, no useful information was gathered from the talk-while-you-review process.
  • The participants didn’t have previous experience with using a tablet PC but figured it out quickly. I have to make sure I come up with some exercise for my participants.
  • The lack of a horizontal scrollbar was mentioned as a problem brought up by the participants. I have not even considered how scrolling works when you’re using a stylus. Something to keep in mind.
  • A couple of participants suggested that colour syntax highlighting would have made the code easier to read. This will probably not work for me since I’ll be working with patches, but maybe.
  • “Text-entry to the assignments via the on-screen keyboard proved to be tedious, so an external keyboard was added during the first participant’s session […] We noted that markers used the pen and the keyboard simultaneously”. Hm. This is likely something I will run into also. Typing is much quicker than handwriting, quicker than talking even. So when the reviewer wants to provide a paragraph of comments they will certainly want a keyboard.
  • “they had initially tried erasing annotations with the back of the pen”. I’m not sure I would have, but erasing annotations is important, and whether it works with the back of the pen depends on the hardware. I have to keep this in mind too (a quick sketch of how a browser reports eraser input follows this list).
  • Here and in one other paper the Word review functionality was mentioned. Another paper (can’t remember which one) lists the advantages and disadvantages of that. I should probably have a similar list.
  • Because of implementation troubles one computer ran the review software and another had the IDE (the second probably to build/run the code). Probably won’t apply to me.
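
As a note to myself, here is a minimal sketch (mine, not from the paper) of how the “back of the pen” could be detected in a browser-based tool using the Pointer Events API. The canvas id and the stroke/erase helpers are placeholders, and whether the eraser is reported at all depends entirely on the pen hardware and driver.

```typescript
// Sketch only: how a browser reports "back of the pen" input via Pointer
// Events. "#ink-overlay" and the two helpers are placeholders of my own.
const inkCanvas = document.querySelector<HTMLCanvasElement>("#ink-overlay");

// Per the Pointer Events spec, the pen eraser sets bit 5 (value 32) in `buttons`.
const ERASER_BIT = 32;

inkCanvas?.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType !== "pen") return;      // only react to stylus input
  if (e.buttons & ERASER_BIT) {
    eraseAt(e.offsetX, e.offsetY);          // pen flipped over: erase
  } else {
    beginStroke(e.offsetX, e.offsetY);      // pen tip: draw ink
  }
});

// Placeholders; a real tool would hit-test and remove existing strokes here.
function eraseAt(x: number, y: number): void { console.log("erase at", x, y); }
function beginStroke(x: number, y: number): void { console.log("stroke at", x, y); }
```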

“RCA: Experiences with an IDE Annotation Tool” (2006):

  • When the underlying layout changes (e.g. font size, zoom) the review comments need to stay in place. This was mentioned in a few other places as “reflowing annotations”, and I thought it didn’t apply to me because the code I’m concerned with will not change: it will probably be a diff that stays in the system forever, whether it’s checked in or altered or not. But if I’m to integrate with ReviewBoard, the user will certainly be able to change the font size. Maybe for my study I can ignore this.
  • Another vote for the ability to modify the review comments, here including not only erasing but selecting, moving, and recolouring. Eeee.. too fancy for my project, probably.
  • It’s not clear what the final implementation was, but I think their ideal would have been a transparent canvas for the ink overlaid on the code editor. I’m not sure whether this is possible with the HTML canvas, but I expect I can just render what would normally go into the browser window into the canvas instead (a rough sketch of the overlay idea follows this list). Thankfully I don’t need to support editing of the code. Fingers crossed.
  • A system similar to the Visual Studio breakpoints was used in RCA to choose a severity for the comment. They didn’t mention here whether it was used/useful or not. I’m not sure whether this paper is newer or older than the one I mentioned above.
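
For my own notes, here is a rough sketch of what that overlay could look like in a browser, assuming a read-only code or diff view in an ordinary element. The element id and the (very simplified) positioning are my own inventions, not anything from the RCA paper or ReviewBoard.

```typescript
// Sketch only: lay a transparent <canvas> over a read-only code/diff view so
// the code stays visible while the canvas captures pen strokes. "#diff-view"
// is a made-up id, and the positioning is deliberately simplified.
const codeView = document.querySelector<HTMLElement>("#diff-view")!;
const overlay = document.createElement("canvas");

overlay.width = codeView.clientWidth;
overlay.height = codeView.clientHeight;
overlay.style.position = "absolute";
overlay.style.left = `${codeView.offsetLeft}px`;
overlay.style.top = `${codeView.offsetTop}px`;
overlay.style.touchAction = "none";           // handle pen input ourselves
codeView.parentElement!.appendChild(overlay); // canvas is transparent by default

const ctx = overlay.getContext("2d")!;
let drawing = false;

overlay.addEventListener("pointerdown", (e) => {
  drawing = true;
  ctx.beginPath();
  ctx.moveTo(e.offsetX, e.offsetY);
});
overlay.addEventListener("pointermove", (e) => {
  if (!drawing) return;
  ctx.lineTo(e.offsetX, e.offsetY);
  ctx.stroke();
});
overlay.addEventListener("pointerup", () => { drawing = false; });
```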

“CodeAnnotator: Digital Ink Annotation within Eclipse” (2007):

  • Nothing new here, but I’ve been kindly offered a chance to review the source code, might learn some useful things from it.

“Issues of Extending the User Interface of Integrated Development Environments”:

  • “size and average complexity of projects has grown, the traditional way of reviewing code is no longer feasible”. I wonder if this refers to reviewing entire codebases, rather than specific changes as listed in a patch file. I think I will ignore this problem and assume that a diff is still the primary method of reviewing changes, because it works well.

“A Comparative Evaluation of Annotation Software for Grading Programming Assignments” (2010):

  • There’s a mention of prior work that found “Ink annotations are rich and expressive due to their free-format” and “Ink annotations also have a part to play in supporting active reading”. I have to read these two papers to see if either has data useful for my study.
  • An adapted means of categorising comments as “either a tick or cross, comment, grade, or other” was used. Given that this paper is a few years newer, perhaps this was found to be a good solution to the difficulties of categorising mentioned in the papers quoted above.
  • “Latin squares arrangement” – something I might want to use to counterbalance the order in which reviewers see the code submissions (a tiny sketch follows this list).
  • This study was run on 600 assignments from 200 students, with real TAs giving real comments and marks. The data resulting from that is very useful and interesting:
    • Paper comments are much more useful than database (form field) comments, but grading on paper was universally disliked because it’s so cumbersome. 70% of the students never collected the marked papers.
    • In the TAs’ feedback on the comparison of paper with the tablet, the digital eraser was voted one of the biggest advantages.
    • It did not take any longer to mark using the tablet than using either paper or the database method. This is encouraging, and something I hope to confirm in my study.
    • Overall the tablet method was found to be the best of the three.
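
As a reminder to myself of what a Latin squares arrangement would give me, here is a tiny sketch (my own, not from the paper): with the four code submissions as conditions, each row is one participant’s presentation order, and every submission appears exactly once in each position across rows. The condition names are placeholders.

```typescript
// A cyclic Latin square: each row is one participant's presentation order,
// and every condition appears exactly once in each position across rows.
function latinSquare(conditions: string[]): string[][] {
  const n = conditions.length;
  return conditions.map((_, row) =>
    conditions.map((_, col) => conditions[(row + col) % n])
  );
}

// Placeholder condition names for my four submissions.
console.log(latinSquare(["simple-1", "simple-2", "complex-1", "complex-2"]));
// [ ["simple-1", "simple-2", "complex-1", "complex-2"],
//   ["simple-2", "complex-1", "complex-2", "simple-1"],
//   ["complex-1", "complex-2", "simple-1", "simple-2"],
//   ["complex-2", "simple-1", "simple-2", "complex-1"] ]
```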

There is so much interesting stuff in these papers that I almost feel like adding it as notes to my own paper, but I think it will be more prudent to just make notes on the side and in the blog for now, until I make some final decisions about what my study is going to look like.

Ethics

I will have to get the ethics review board’s approval before I can run my experiment. It sounds like it’s too early to think about that, but I hear the process can take some time, and I can’t afford to be delayed for a stupid reason like that.

Still, I can’t fill out the form until I have some more details about my project – especially who the subjects are and what questions exactly I’ll be asking them. And I have to figure out how I’m going to find the participants. So for now I’m just making sure I see all the roadblocks and delays before I get to them.

One piece of good news is that I won’t need to do the same at Liverpool; at least, this suggests that I don’t:

The remit of the Committee on Research Ethics is research projects involving research on human participants, or human tissues, or databases of personal information to be carried out by University staff or students on University premises, or at any other location where there is no other acceptable provision for ethical consideration.

One interesting part of a fellow student’s submission to the ethics board said that (1) the data is anonymized and (2) even that is to be destroyed after 5 years (erm…).

It makes perfect sense that the data should be anonymized but that will make it difficult to use version control for the project. I am a big fan of Subversion. Perhaps I will keep the raw data in a password-protected zip file checked into svn (or something even more secure).

Destroying data gathered during a scientific experiment seems like heresy to me. I would really rather not do this, even though no one may ever look for the source of my conclusions. But perhaps there are reasons to get rid of it that I don’t know about.

Potential experiment – attempt #1

One of the things I can do now is try to define what my experiments with people will look like. To perform these experiments I would need to get approval from the UofT ethics board, and to get that approval I need to fill out an application describing what sort of information I will collect, from whom, and under what circumstances.

This is the first attempt. Let’s assume for the purpose of this exercise that the software will be an extension of Review Board.

This will give me potential access to the Basie project (my fellow students Zuzel and Mike Conley are working with it), a bunch of open source projects, perhaps even a business or two from those listed on the Review Board site, and whomever else I can find.

Greg mentioned I need to be in physical proximity to the subjects; I forgot to ask why – perhaps it’s not necessary for the experiment as I define it. The most likely candidate is Basie.

I would very much prefer to have a controlled experiment, but in a real environment. That means Review Board has to already be actively used in the project I’m looking at, with at least several reviews done per week. This may be a challenge, but I’ll work on the assumption that I can find such a project.

Oh, the bigger problem is that the reviewers need tablets (using a mouse will be useless). This is probably what Greg meant.

I will have N reviewers participating. For this description I will only mention one, and I’ll call this person Roy the reviewer. I will give Roy a tablet and a URL to 4 code submissions:

  • Two simple changes (something like a new 2nd year undergrad hello world function)
  • Two complex changes (maybe database access, or more than one file affected, or changes spread wide across one file)

Each of these will have the same number of problems, but they will be different so that one review doesn’t affect the other.

I will ask Roy to review one of the simple changes and one of the complex changes normally, by typing in comments. This would tell me (a sketch of the per-session record I might keep follows the list):

  1. What number of substantively different comments they provided
  2. How many of those are of what type (design flaw, bug, usability problem, style issue)
  3. How much time was spent on the review
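
Purely as a sketch of my own – the field names and comment categories below are placeholders, not taken from any existing tool – the record I might keep per review session could look like this, with a small helper to pull out the three measures above:

```typescript
// Placeholder types of my own, not from any existing tool.
type CommentKind = "design" | "bug" | "usability" | "style";
type Medium = "keyboard" | "stylus";

interface ReviewSession {
  reviewer: string;
  submission: string;                    // e.g. "simple-1", "complex-2"
  medium: Medium;
  startedAt: Date;
  finishedAt: Date;
  comments: { kind: CommentKind; text: string }[];
}

// Pull out the three measures listed above for one session.
function summarise(s: ReviewSession) {
  const byKind: Record<string, number> = {};
  for (const c of s.comments) {
    byKind[c.kind] = (byKind[c.kind] ?? 0) + 1;
  }
  return {
    totalComments: s.comments.length,                                       // (1)
    byKind,                                                                 // (2)
    minutesSpent: (s.finishedAt.getTime() - s.startedAt.getTime()) / 60000, // (3)
  };
}
```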

Then I will ask Roy to try the stylus, get used to using it as a pen, and then use it to add review comments either inline or beside the code they refer to, treating the monitor as a printout.

Then I would compare the results.

Using a stylus should not be a problem. But experienced reviewers have developed a system for providing good feedback using the old-school method, so I have to account for that somehow. Perhaps a few days later I can repeat the experiment and see if their use of the new system is any different.

Do this with N people, and perhaps some statistically significant results can be deduced.

Code review using tablets – literature review (2)

Today I hope to find all (yeah, I know) previous work related to tablets and code review. I’m not going to read all of it, but all I need at this point is a good idea of what’s been done so far – that will help me understand how much previous work I can rely on and whether this project is original enough as I have defined it.

I’m also starting to save local copies of all these papers; I have a feeling that getting them via the library channel is going to be too painful long-term.

“Reflowing Digital Ink Annotations” is about the mechanics of keeping freeform annotations useful even if the text is edited. Looks like serious work. Referenced from “CodeAnnotator: Digital Ink Annotation within Eclipse” – it’s a must-read if an implementation needing such functionality is to be created. I don’t expect such functionality will be available in my project; that’s out of scope for me.

“Annotation: from paper books to the digital library”, “Toward an ecology of hypertext annotation”, “Developing marking support within Eclipse”, “An Architecture for Ink Annotations on Web Documents”, “Beyond paper: supporting active reading with free form digital ink annotations”, “Spatial recognition and grouping of text and graphics”, “Web Page Marker: a Web Browsing Support System based on Marking”, “Robust annotation positioning in digital documents”, “Preliminary experiences with a tablet PC based system to support active learning in computer science courses”, “Onscreen marking support for formative assessment”, “Ad-hoc Collaborative Document Annotation on a Tablet PC”, “Improving Software Quality” are not very interesting but I may want to look at them later for more basic studies.

“Teaching with Tablet PCs” is nearly off-topic, but they used an interesting system: the annotations were done using a “virtual transparency” over the desktop.

“Effects of Annotations on Student Readers and Writers” is also irrelevant but talks about an interesting topic – how annotated material influences subsequent readers.

“Pen-based interaction techniques for organizing material on an electronic whiteboard” mentions another technology I haven’t yet considered – electronic whiteboards. I should read this one, to see what issues they came up with.

“An instrumented approach to improving software quality through formal technical review” looks like a serious study of code review, and mentions something particularly curious: “a great deal of expensive human technical resources. For example, a recent study documents that a single code inspection of a 20 KLOC software system consumes one person-year of effort by skilled technical staff.” Wow. I have to read all of this to see what problems they uncovered and whether tablets can help solve any of them.

There is more of course, and depending on how wide and deep I want to take the literature review – I can spend a year on it. For now this should do.

My exasperation at the end of yesterday’s post was misplaced. Apart from the unfortunate fact that I won’t be the first to implement such a system, there is definitely room for more research. And very little work (almost none, really) has been done to put code review and tablet annotations together.

Code review using tablets – literature review (1)

I looked for existing work and opinion on code review using tablets today.

I started by looking for people who do what I do at work – print out the code to be reviewed. It looks like I’m not unusual in thinking this is a worthwhile thing to do; in fact, the practice seems almost common. But I haven’t sensed much love for printouts out there.

Gonsalves and Murthy (2001) say a couple of interesting things:

The reviewer should not try to fix any bugs or improve the code. S/he should merely inform the programmer about potential bugs and possible improvements.

2. Steps in Code Review
1. Obtain print-outs of the specs and design documents, and of the code.

That second suggestion is interesting, if for no other reason than that it shows some people consider reviewing code in a text editor inefficient.

Some in the wild call printouts “a conventional technique” or “soooo 1976”. That doesn’t surprise me; in fact, I’m surprised this is not the dominant opinion. Programmers love gadgets and fancy software.

Looking at tablet code review in particular, I was glad to see there isn’t much. At first I thought this was a completely unexplored idea, but then I found “A System for Developing Tablet PC Applications for Education” (2008), by a group who developed such an application. The software was not made especially for code review (it’s only one use case) and I’m not sure it was ever completed (“This application is under active development. We do not know what set of features will best facilitate discussion of each student’s code.”). There’s a note Mike might find interesting:

Our department has a required class for juniors, entitled “Programming Studio,” in which students meet each week, in small groups, to review each other’s code [13].

“Panel: Using Peer Review in Teaching Computing” mentions “Using a Tablet-PC to Provide Peer-Review Comments”, a simple experiment which found (among other things) that:

In general, we found that the most natural medium for providing comments was the paper/pen. However, that medium also invited more editing-style comments, which were not necessarily appropriate for the task at hand.

Good, that’s good. “Inking in the IDE: Experiences with Pen-based Design and Annotation” is mostly about drawing diagrams, but they do mention code annotations and that unfortunately they don’t work because they’re not integrated with IDEs.

“Moving Markup: Repositioning Freeform Annotations” (2002) is not about code review; they developed a system for annotating text on a tablet, and they mention something I like:

Freeform digital ink annotation allows readers to interact with documents in an intuitive and familiar manner. Such marks are easy to manage on static documents, and provide a familiar annotation experience.

And here’s what I was afraid of: this concept has not only been considered but implemented multiple times. “CodeAnnotator: Digital Ink Annotation within Eclipse” (2007) mentions Penmarked and RCA as “limited” solutions, and the authors also go over their own implementation:

Eclipse Digital Ink Annotation

Bah, I really hoped I would be the first to implement something like this. Oh well.

“A Pen-based Paperless Environment for Annotating and Marking Student Assignments” (2006) describes Penmarked. Again – it looks like just an implementation, with no serious evaluation of its efficiency.

“RCA: Experiences with an IDE Annotation Tool” is about (surprise) RCA. Thankfully this is also only an implementation. These guys integrated it with Visual Studio. Doesn’t look like it’s gone beyond a research project.

I can’t take it any more; I will have to continue later, maybe Saturday. Just looking at the long list of references in the papers above makes me sick.

Code review using tablets – overview

Code review is a difficult subject. Not unlike unit tests, code review is overwhelmingly considered a good thing by those who write textbooks, or preach dogma, or create company policies; yet it has not been overwhelmingly accepted as a requirement in all organisations creating software.

I suspect that as with unit tests the stumbling block is the resources it takes – someone needs to spend X hours per week reviewing code. How can the effort required be minimised? If tablets were used to review code freehand (less typing, more drawing, easier expression) – would that encourage more reviews, or more feedback from reviewers?

Mike has been looking into code review tools for quite some time. He often mentions the Cisco Systems study. Making my life easier, he made a list of interesting readings – a great overview of prior work.

But none of that work concentrated on finding out how much overhead the mechanics of code review add. To make code review happen more often, its cost has to be brought down – both in terms of time and in terms of the energy required. I will spend some time this week looking for research more directly related to this issue.

Also, personal experience reminds me that my code was only seriously reviewed when I had just started. The more senior programmers, in my experience, don’t have their changes reviewed at all, or only selected changes are reviewed. As a reviewer I found that it’s difficult to write down what I’m thinking. When marking assignments I had to print them out, because drawing, underlining, circling, and writing a couple of words next to some line or symbol is much easier than trying to explain it in P.S. notes.

[I believe the best way to review code is to have both the author and reviewer sit next to each other and draw on a whiteboard/printout during the review, but that’s a different challenge I would not want to take on as a research problem.]

If it is found that tablet-based software can help with code reviews – that will be a thumbs-up for tablets, but more importantly – it will be a reason to develop software for traditional workstations, software that will encourage more people to do better reviews.

From a technical point of view, this would probably be implemented as part of ReviewBoard, using ReviewBoard as the backend and the Canvas HTML element to extend the ReviewBoard frontend.

A couple of links on freehand drawing with Canvas: TinyDoodle, CanvasPaint.
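
To make the idea concrete for myself, here is a rough sketch of the frontend piece: capturing freehand strokes from a canvas as point lists so they could be stored alongside the review. This is only my own guess at the shape of it – the canvas id and the endpoint URL are made up, and a real integration would go through whatever extension API ReviewBoard actually provides.

```typescript
// Capture freehand strokes as point lists and draw them as they happen.
// "#review-ink" and "/api/ink-annotations/" are made-up names, not part of
// ReviewBoard's real API.
type Point = { x: number; y: number };

const canvas = document.querySelector<HTMLCanvasElement>("#review-ink")!;
const ctx = canvas.getContext("2d")!;
const strokes: Point[][] = [];
let current: Point[] | null = null;

canvas.addEventListener("pointerdown", (e) => {
  current = [{ x: e.offsetX, y: e.offsetY }];
});

canvas.addEventListener("pointermove", (e) => {
  if (!current) return;
  const prev = current[current.length - 1];
  current.push({ x: e.offsetX, y: e.offsetY });
  ctx.beginPath();
  ctx.moveTo(prev.x, prev.y);
  ctx.lineTo(e.offsetX, e.offsetY);
  ctx.stroke();
});

canvas.addEventListener("pointerup", async () => {
  if (!current) return;
  strokes.push(current);
  current = null;
  // Hypothetical endpoint: persist the ink so the backend can keep it with the review.
  await fetch("/api/ink-annotations/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ strokes }),
  });
});
```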

More “clickers are good” suggestions

Read another paper today, “Classroom Response and Communication Systems: Research Review and Theory”.

Yet again, it says none of the existing research findings are reliable enough:

A total of 26 studies report outcomes (listed in full paper). The most commonly reported outcomes are: greater student engagement (16 studies), increased understanding of subject matter (11), increased enjoyment of class (7), better group interaction (6), helping students gauge their own understanding (5), and teachers have better awareness of student difficulties (4)

This body of evidence, taken together, is suggestive of a real and important phenomenon at hand. However, none of the available studies rises to the present specification of “scientifically based research” that would allow inferences about causal relationships or that could form the basis for estimating the magnitude of the effect.

This one’s from 2004, so maybe it’s too old to matter. Or perhaps they’re looking for undue process. Is it unreasonable to expect someone to accept overwhelming consensus? Is it reasonable to expect proof in a complex problem space where mathematical proof is impossible? On the other hand – in every paper I read on this topic there seems to be a predisposition to praise clickers.

Perhaps if someone set out to find the problems with clickers, rather than articulate yet another benefit, it would help companies fix those problems. Negative doesn’t mean bad.

Bureaucracy – answers

I got some answers to the questions I asked earlier.

Picking an applied topic for the dissertation is not ill-advised, provided that I find a DA who’s interested.

Using my employer as a sponsor is a really bad idea, unless they’re a sponsor in name only.

Liverpool owns my work in its entirety until it’s been reviewed and rubber-stamped on completion. If I want to do the work in the open, that’s possible, but I have to make sure my DA is OK with it.